Despite Alan Turing’s fear in 1951 that ‘once the machine thinking method has started at some stage we will have to expect the machine will take control’, and Norbert Wiener’s prediction in the 1960s that ‘we know that for a long time everything we do will be nothing more than the jumping off point for those who have the advantage of already being aware of our ultimate results’, strong “Artificial Intelligence” has only recently triggered a polarised debate across several domains of science, together with the inevitable seduction of the general public’s imagination. The algorithmic dance on oligopolistic social media and e-commerce platforms in particular has been at the forefront of claims that democracy is being “hacked”: held responsible for fake news and disinformation architectures; for symptoms of populism, radicalism and violent extremism; for reproducing gender, race, class and other biases in employment, health and education; and for the problematics of digital labour and the gig economy around the future of work and its intensification. This has given birth to new concerns over data justice, whistle-blowing at the tech giants, digital rights, data inequality and the environmental impact of computation, and even to movements of resistance against intelligent machines altogether.

Cutting through this hyperbolic, yet partially justified, fog, my talk will engage with the problem of loss and seduction: learning from machines comes with a sense of loss, not because we lose the uniqueness of being human, but because internal human temporalisation accelerates in sync with machines in ways humanity cannot yet understand. Moreover, we still understand little about how babies and children learn, and we have made little progress in understanding how we experience human consciousness. To substantiate this argument, my talk will rely on empirical snapshots to argue for a de-translation and re-translation of what we might think we are learning from machines, and of where we might want to intervene and correct in the process of machines learning, by ‘becoming more certain of the purpose with which we desire’ (Wiener 1961), in order to invent an AI future that includes everyone.