In this talk I will present the adjoint method, a general technique for computing gradients of a function or a simulation. The method has applications in many fields, including optimization and machine learning; the popular backpropagation procedure in deep learning is one example. I will cover both the theory behind the technique and the practical implementation details, and I will demonstrate an adjoint version of my well-known fluid solver written in 100 lines of C code. I will also show examples of controlling rigid body simulations, and mention novel applications of the continuous adjoint method in deep learning.
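To give a flavor of the technique, here is a minimal sketch (not from the talk; the dynamics and objective are invented for illustration). A forward pass records the trajectory of a toy scalar simulation x_{k+1} = sin(theta * x_k); a backward "adjoint" sweep then propagates the sensitivity of an objective J = x_N^2 back through the steps, accumulating dJ/dtheta in a single pass, which is exactly the structure backpropagation exploits:

```python
import math

def simulate(theta, x0=0.5, steps=10):
    # Forward pass: record the trajectory of x_{k+1} = sin(theta * x_k).
    xs = [x0]
    for _ in range(steps):
        xs.append(math.sin(theta * xs[-1]))
    return xs

def adjoint_gradient(theta, x0=0.5, steps=10):
    # Adjoint (reverse) sweep for J = x_N^2: propagate lam = dJ/dx_{k+1}
    # backward through the recorded trajectory, accumulating dJ/dtheta.
    xs = simulate(theta, x0, steps)
    lam = 2.0 * xs[-1]            # dJ/dx_N
    grad = 0.0
    for k in range(steps - 1, -1, -1):
        c = math.cos(theta * xs[k])
        grad += lam * xs[k] * c   # lam * d(x_{k+1})/d(theta)
        lam *= theta * c          # lam * d(x_{k+1})/d(x_k)
    return xs[-1] ** 2, grad

J, g = adjoint_gradient(1.3)

# Sanity check against a central finite difference.
eps = 1e-6
Jp = simulate(1.3 + eps)[-1] ** 2
Jm = simulate(1.3 - eps)[-1] ** 2
fd = (Jp - Jm) / (2 * eps)
assert abs(g - fd) < 1e-5
```

The key property, and the reason the method scales to simulations with many parameters, is that the backward sweep costs about the same as one forward simulation regardless of how many parameters the gradient is taken with respect to.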