Efficient utilization of renewable energy sources, such as solar energy, is crucial for achieving sustainable development goals. Because solar energy production varies in time and space with weather conditions, how to combine it with distributed energy storage and exchange systems under intelligent control is an important research issue. In this project, I explore the use of reinforcement learning (RL) for adaptive control of energy storage in local batteries and energy sharing through energy grids. I first test multiple RL algorithms for energy storage control of single houses. I then extend Sony's Autonomous Power Interchange System (APIS) to combine it with an RL algorithm in each house. I consider different design decisions in applying RL: whether to use centralized or distributed control, at what level of detail actions should be learned, what information each agent uses, and how much information is shared across agents. Based on these considerations, I implement a deep Q-network (DQN) and a prioritized DQN to set the parameters of the real-time energy exchange protocol of APIS and test them using actual data collected from the OIST DC-based Open Energy System (DCOES). The simulation results show that DQN agents outperform rule-based control in energy sharing and that prioritized experience replay further improves the performance of DQN. The results also suggest that sharing average energy production, storage, and usage within the community improves performance. These findings contribute to future designs of distributed intelligent agents and to the effective operation of energy grid systems.