We consider a system of UAVs, depots, service stations, and tasks in a stochastic environment. Our goal is to jointly determine the system resources (system design), the task allocation, and the waypoint selection. To our knowledge, this joint decision problem has not been studied in a stochastic setting. We formulate the problem as a Markov decision process (MDP) and apply deep reinforcement learning (DRL) to obtain state-based decisions. Numerical studies are conducted to assess the performance of the proposed approach. In small examples for which an optimal policy can be computed, the DRL-based approach obtains near-optimal solutions while running much faster than value iteration. In large examples, the DRL-based approach finds efficient designs and policies.