
What is the Python Dunder Method for the “not and” Operator?


In Python, “dunder” methods, short for “double underscore” methods, are special methods that allow developers to define the behavior of built-in operations for custom objects. For instance, when you use the + operator to add two objects, Python internally calls the __add__ method. Similarly, other operators have their corresponding dunder methods.

However, the term “not and” operator might be a bit misleading, as there isn’t a direct “not and” operator in Python.

Instead, Python provides two separate operators: not and and. But if we delve into the realm of bitwise operations, we find operators that resemble this behavior: the bitwise NOT (~) and the bitwise AND (&).

Let’s explore the dunder methods associated with these operators.

Bitwise NOT (~) and its Dunder Method __invert__

The bitwise NOT operator flips the bits of a number. For a custom class, if you want to define or override the behavior of the ~ operator, you’d use the __invert__ method.

class BitwiseNumber:
    def __init__(self, value):
        self.value = value

    def __invert__(self):
        return BitwiseNumber(~self.value)

    def __repr__(self):
        return str(self.value)

number = BitwiseNumber(5)
print(~number) # Outputs: -6

In the above example, the __invert__ method returns a new BitwiseNumber object with its value inverted.

Bitwise AND (&) and its Dunder Method __and__

The bitwise AND operator performs a bitwise AND operation between two numbers. For custom classes, the behavior of the & operator can be defined or overridden using the __and__ method.

class BitwiseNumber:
    def __init__(self, value):
        self.value = value

    def __and__(self, other):
        if isinstance(other, BitwiseNumber):
            return BitwiseNumber(self.value & other.value)
        return NotImplemented

    def __repr__(self):
        return str(self.value)

number1 = BitwiseNumber(5)  # Binary: 101
number2 = BitwiseNumber(3)  # Binary: 011
print(number1 & number2) # Outputs: 1 (Binary: 001)

In this example, the __and__ method checks if the other object is an instance of BitwiseNumber and then performs a bitwise AND operation.

TLDR

While there isn’t a direct “not and” operator in Python, by leveraging the __invert__ and __and__ methods you can define how the bitwise NOT and AND operations work for your custom objects.
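Putting the two together, the closest Python gets to a “not and” (NAND) is composing the operators yourself as ~(a & b). Here’s a minimal sketch combining both dunder methods from the examples above:

```python
class BitwiseNumber:
    def __init__(self, value):
        self.value = value

    def __and__(self, other):
        if isinstance(other, BitwiseNumber):
            return BitwiseNumber(self.value & other.value)
        return NotImplemented

    def __invert__(self):
        return BitwiseNumber(~self.value)

    def __repr__(self):
        return str(self.value)


a = BitwiseNumber(5)  # Binary: 101
b = BitwiseNumber(3)  # Binary: 011
nand = ~(a & b)       # Python calls __and__ first, then __invert__
print(nand)           # -2, because ~(101 & 011) = ~001 = -2
```

Python evaluates the expression inside out: a & b dispatches to __and__, and the ~ on the result dispatches to __invert__.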

The post What is the Python Dunder Method for the “not and” Operator? appeared first on Be on the Right Side of Change.


Can I Run OpenAI’s API in Parallel? Yes, with Python Async!


If you’re like me, you’re using the OpenAI API a lot in your Python code. So the natural question arises: “How can I use OpenAI’s API asynchronously by issuing multiple requests at once?”

I will give you my code for asynchronous OpenAI API requests for copy and paste below. But first, allow me to give you a word of warning from coder to coder:

Generally speaking, many coders reach for asynchronous code because they want to, not because they need to. Asynchronous code is harder to read, more error-prone, and less predictable, and scheduling overhead and context switches can even make it slower.

Specifically, when issuing asynchronous requests against the OpenAI API, you should be aware of the rate limits, which may become the bottleneck of your asynchronous Python app.

Okay, enough of the warnings. Let’s do it: 👇

Method 1: Using OpenAI API Calls Asynchronously

I have developed the following code to issue OpenAI requests asynchronously in Python — make sure to replace the highlighted lines with your OpenAI key and your desired prompts:

import aiohttp
import asyncio
import openai

# Set up your OpenAI API key
openai.api_key = 'sk-...'

# Example prompts
prompts = ["What is the capital of France?",
           "How does photosynthesis work?",
           "Who wrote 'Pride and Prejudice'?"]

async def async_openai_request(prompt):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {openai.api_key}",
        "Content-Type": "application/json"
    }
    data = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1,
        "max_tokens": 150,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=data, headers=headers) as response:
            return await response.json()

async def main():
    # Gather results from all asynchronous tasks
    results = await asyncio.gather(*(async_openai_request(prompt) for prompt in prompts))
    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}")
        print(f"Response: {result['choices'][0]['message']['content']}\n")

# Run the main function
asyncio.run(main())

I’ll give you the output at the end of this article. But first let’s go through the code step by step to ensure you understand everything.

Note that if you need a refresher on the Python OpenAI API, feel free to check out the corresponding Finxter Academy course.

Step 1: Imports

The code begins by importing three essential libraries.

aiohttp is used for making asynchronous HTTP requests, allowing the program to send and receive data from the OpenAI API without blocking the main thread.

asyncio provides the tools to write concurrent code using the async/await syntax to handle multiple tasks simultaneously.

Lastly, the openai library is the official OpenAI API client, facilitating interactions with the OpenAI platform.

Step 2: Set up the OpenAI API Key

Following the imports, the OpenAI API key is set up. It’s important to note that hard-coding API keys directly in the code is not a recommended practice. For security reasons, it’s better to use environment variables or configuration files to store such sensitive information.
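Here’s one minimal way to load the key from an environment variable instead; the variable name OPENAI_API_KEY is the conventional one, but any name you export in your shell works:

```python
import os

# Read the key from an environment variable instead of hard-coding it.
# Set it first in your shell, e.g.:  export OPENAI_API_KEY='sk-...'
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY is not set -- API calls will fail.")
```

You would then assign openai.api_key = api_key instead of embedding the literal string in your source file.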

🔗 Recommended: OpenAI Python API – A Helpful Illustrated Guide in 5 Steps

Step 3: Asynchronous Function for API Requests

The async_openai_request function is defined to handle asynchronous requests to the OpenAI API.

async def async_openai_request(prompt):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {openai.api_key}",
        "Content-Type": "application/json"
    }
    data = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1,
        "max_tokens": 150,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=data, headers=headers) as response:
            return await response.json()

When this function is called with a specific prompt, it prepares and sends an asynchronous request to the OpenAI API’s chat completions endpoint.

The headers for the request include the authorization, which uses the API key, and the content type. The payload (data) sent to the API specifies several parameters, including the model to use (gpt-4), the message format containing the user’s prompt, and other parameters like temperature, max_tokens, top_p, frequency_penalty, and presence_penalty that influence the output.

The function then establishes an asynchronous session using aiohttp and sends a POST request with the specified data and headers. Once the response is received, it’s returned in JSON format.

Step 4: Main Asynchronous Function

The main function encapsulates the primary logic of the program, operating on the list of example prompts defined earlier.

async def main():
    # Gather results from all asynchronous tasks
    results = await asyncio.gather(*(async_openai_request(prompt) for prompt in prompts))
    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}")
        print(f"Response: {result['choices'][0]['message']['content']}\n")

For each of these prompts, asynchronous requests are sent to the OpenAI API using the previously defined async_openai_request function. The asyncio.gather method is employed to concurrently collect results from all the asynchronous tasks. Once all responses are received, the function iterates over the prompts and their corresponding results, printing them out for the user.
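The key payoff of asyncio.gather is that the waits overlap. The following stripped-down sketch substitutes asyncio.sleep for the network round-trips, so three simulated 0.2-second “requests” finish in roughly 0.2 seconds total rather than 0.6:

```python
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.2)  # stands in for one network round-trip
    return f"result {i}"

async def run_all():
    start = time.perf_counter()
    # gather schedules all three coroutines at once, so their waits overlap
    results = await asyncio.gather(*(fake_request(i) for i in range(3)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(run_all())
print(results, f"took {elapsed:.2f}s")  # ~0.2s total, not ~0.6s
```

gather also preserves order: results[0] corresponds to the first coroutine passed in, which is why zipping prompts and results in the real code lines up correctly.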

Step 5: Execution

Finally, the asyncio.run(main()) command is used to execute the main function. When the code is run, it will send asynchronous requests for each of the example prompts and display the responses in the console.

The output is:

Prompt: What is the capital of France?
Response: The capital of France is Paris.

Prompt: How does photosynthesis work?
Response: Photosynthesis is a process used by plants, algae and certain bacteria to convert sunlight, water and carbon dioxide into food and oxygen. This process happens inside the chloroplasts, specifically using chlorophyll, the green pigment involved in photosynthesis. Photosynthesis occurs in two stages: the light-dependent reactions and the light-independent reactions, also known as the Calvin Cycle. In the light-dependent reactions, which take place in the thylakoid membrane of the chloroplasts, light energy is converted into chemical energy. When light is absorbed by chlorophyll, it excites the electrons, increasing their energy level and triggering a series of chemical reactions. Water molecules are split to produce oxygen, electrons, and hydrogen ions. The oxygen is released into the

Prompt: Who wrote 'Pride and Prejudice'?
Response: 'Pride and Prejudice' was written by Jane Austen.

Method 2: Using OpenAI ChatCompletion’s acreate()

An alternative to the above method of using asynchronous requests against the OpenAI API endpoint is to use OpenAI’s native asynchronous methods, as noted in the docs: “Async support is available in the API by prepending a to a network-bound method”.

In the following code, I have only changed the OpenAI API calling function to use the acreate() method:

import asyncio
import openai

# Set up your OpenAI API key
openai.api_key = 'sk-...'

async def create_chat_completion(prompt):
    chat_completion_resp = await openai.ChatCompletion.acreate(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return chat_completion_resp

async def main():
    # Example prompts
    prompts = ["What is the capital of France?",
               "How does photosynthesis work?",
               "Who wrote 'Pride and Prejudice'?"]

    # Gather results from all asynchronous tasks
    results = await asyncio.gather(*(create_chat_completion(prompt) for prompt in prompts))
    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}")
        print(f"Response: {result['choices'][0]['message']['content']}\n")

# Run the main function
asyncio.run(main())

Output:

Prompt: What is the capital of France?
Response: The capital of France is Paris.

Prompt: How does photosynthesis work?
Response: Photosynthesis is the process by which green plants, algae and some bacteria convert light energy, usually from the sun, into chemical energy in the form of glucose (sugar). This process is essential for life on earth as it is the primary source of all oxygen in the atmosphere. This process takes place in a part of the plant cell called the chloroplast, more specifically, within the chlorophyll molecules, which absorbs sunlight (specifically, photons) and gives plants their green color. It can be divided into two main stages: the light-dependent reactions and the light-independent reactions or Calvin Cycle. In the light-dependent reactions, which take place in the thylakoid membrane of the chloroplasts, light is absorbed by the chlorophyll and converted into chemical energy - in the form of ATP (Adenosine triphosphate) and NADPH (Nicotinamide adenine dinucleotide phosphate). This process also splits water molecules (H2O) into oxygen (O2), which is released into the atmosphere, and hydrogen ions (H+), which are used in the next stage of photosynthesis. In the second stage, the light-independent reactions or Calvin Cycle, which takes place in the stroma of the chloroplasts, the ATP and NADPH produced in the light-dependent reactions, along with carbon dioxide (CO2) from the atmosphere, are used to produce glucose (sugar), which is used as an energy source for plant growth and development. In summary, during photosynthesis, light energy is converted into chemical energy, which fuels the organisms' activities, and oxygen is released into the atmosphere as a byproduct.

Prompt: Who wrote 'Pride and Prejudice'?
Response: 'Pride and Prejudice' was written by Jane Austen.

Make sure to check out our course on prompt engineering with Llama 2 in case you’re looking to leverage a free and open-source large language model (LLM) instead of the paid OpenAI API:

Prompt Engineering with Llama 2

💡 The Llama 2 Prompt Engineering course helps you stay on the right side of change. Our course is meticulously designed to provide you with hands-on experience through genuine projects.

You’ll delve into practical applications such as book PDF querying, payroll auditing, and hotel review analytics. These aren’t just theoretical exercises; they’re real-world challenges that businesses face daily.

By studying these projects, you’ll gain a deeper comprehension of how to harness the power of Llama 2 using 🐍 Python, 🔗🦜 Langchain, 🌲 Pinecone, and a whole stack of highly ⚒🛠 practical tools of exponential coders in a post-ChatGPT world.



Python Async Generator: Mastering Asyncio in Modern Applications


Asyncio Overview

Asyncio is a Python library that allows you to write asynchronous code, providing an event loop, coroutines, and tasks to help manage concurrency without the need for parallelism. With asyncio, you can develop high-performance applications that harness the power of asynchronous programming without running into callback hell or dealing with the complexity of threads.

Async and Await Key Concepts

By incorporating the async and await keywords, Python’s asynchronous generators build upon the foundation of traditional generators, which make use of the yield keyword.

To work effectively with asyncio, there are two essential concepts you should understand: async and await.

  • async: The async keyword defines a function as a coroutine, making it possible to execute asynchronously. When you define a function with async def, you’re telling Python that the function is capable of asynchronous execution. This means that it can be scheduled to run concurrently without blocking other tasks.
  • await: The await keyword allows you to pause and resume the execution of a coroutine within your asynchronous code. Using await before calling another coroutine signifies that your current coroutine should wait for the completion of the called coroutine. While waiting, the asyncio event loop can perform other tasks concurrently.

Here’s a simple example incorporating these concepts:

import asyncio

async def my_coroutine():
    print("Starting the coroutine")
    # Simulate a blocking operation using asyncio.sleep
    await asyncio.sleep(2)
    print("Coroutine completed")

async def main():
    # Schedule the coroutine as a task and wait for it to complete
    my_task = asyncio.create_task(my_coroutine())
    await my_task

# Run the event loop until the task is completed
asyncio.run(main())

Generators and Asyncio

Generators are a powerful feature in Python that allow you to create an iterator using a function. They enable you to loop over a large sequence of values without creating all the values in memory.

You can learn everything about generators in our Finxter tutorial here:


💡 Recommended: Understanding Generators In Python

Generators are particularly useful when working with asynchronous programming, like when using the asyncio library.

Yield Expressions and Statements

In Python, the yield keyword is used in generator functions to produce values one at a time. This enables you to pause the execution of the function, return the current value, and resume execution later.

There are two types of yield expressions you should be familiar with:

  • the yield expression and
  • the yield from statement.

A simple yield expression in a generator function might look like this:

def simple_generator():
    for i in range(5):
        yield i

This generator function produces values from 0 to 4, one at a time. You can use this generator in a for loop to print the generated values:

for value in simple_generator():
    print(value)

yield from is a statement used to delegate part of a generator’s operation to another generator. It can simplify your code when working with nested generators.

Here’s an example of how you might use yield from in a generator:

def nested_generator():
    yield "Start"
    yield from range(3)
    yield "End"

for value in nested_generator():
    print(value)

This code will output:

Start
0
1
2
End

Python Async Generators

Asynchronous generators were introduced in Python 3.6 with the PEP 525 proposal, enabling developers to handle asynchronous tasks more efficiently using the async def and yield keywords. In an async generator, you’ll need to define a function with the async def keyword, and the function body should contain the yield statement.

Here is an example of creating an asynchronous generator:

import asyncio

async def async_generator_example(start, stop):
    for number in range(start, stop):
        await asyncio.sleep(1)
        yield number

Using Async Generators

To consume values from an async generator, you’ll need to use the async for loop. The async for loop arrived with PEP 492 in Python 3.5, slightly before async generators themselves, and makes it straightforward to iterate over the values an async generator yields.

Here’s an example of using async for to work with the async generator:

import asyncio

async def main():
    async for num in async_generator_example(1, 5):
        print(num)

# Run the main function using asyncio's event loop
if __name__ == "__main__":
    asyncio.run(main())

In this example, the main() function loops over the values yielded by the async_generator_example() async generator, printing them one by one.

Errors in Async Generators

Handling errors in async generators can be a bit different compared to regular generators. An important concept to understand is that when an exception occurs inside an async generator, it may propagate up the call stack and eventually reach the async for loop. To handle such situations gracefully, you should use try and except blocks within your async generator code.

Here’s an example that shows how to handle errors in async generators:

import asyncio

async def async_generator_example(start, stop):
    for number in range(start, stop):
        try:
            await asyncio.sleep(1)
            if number % 2 == 0:
                raise ValueError("Even numbers are not allowed.")
            yield number
        except ValueError as e:
            print(f"Error in generator: {e}")

async def main():
    async for num in async_generator_example(1, 5):
        print(num)

# Run the main function using asyncio's event loop
if __name__ == "__main__":
    asyncio.run(main())

In this example, when the async generator encounters an even number, it raises a ValueError. The exception is handled within the generator function, allowing the async generator to continue its execution and the async for loop to iterate over the remaining odd numbers.
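Alternatively, you can let the exception propagate and catch it at the consumer, wrapping the async for loop in try/except. The sketch below illustrates the trade-off: once an async generator raises, it is finished, so only the values yielded before the error arrive:

```python
import asyncio

async def strict_generator(start, stop):
    for number in range(start, stop):
        if number % 2 == 0:
            raise ValueError("Even numbers are not allowed.")
        yield number

async def main():
    received = []
    try:
        # The exception surfaces here, at the async for loop
        async for num in strict_generator(1, 5):
            received.append(num)
    except ValueError as e:
        print(f"Caught at the consumer: {e}")
    # The generator terminated on the error: only 1 was received
    return received

received = asyncio.run(main())
print(received)
```

Handling the error inside the generator (as above) lets iteration continue; handling it at the consumer ends the iteration. Which is right depends on whether a bad item should abort the whole stream.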

Advanced Topics

Multiprocessing and Threading

When working with Python async generators, you can leverage the power of multiprocessing and threading to execute tasks concurrently.

The concurrent.futures module provides a high-level interface for asynchronously executing callables, enabling you to focus on your tasks rather than managing threads, processes, and synchronization.

Using ThreadPoolExecutor and ProcessPoolExecutor, you can manage multiple threads and processes, respectively.

For example, in asynchronous I/O operations, you can utilize asyncio and run synchronous functions in a separate thread using the run_in_executor() method to avoid blocking the main event loop:

import asyncio
from concurrent.futures import ThreadPoolExecutor

import requests  # synchronous HTTP library, run in a worker thread

async def async_fetch(url):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as executor:
        return await loop.run_in_executor(executor, requests.get, url)

Contextlib and Python Asyncio

contextlib is a useful standard-library module for context and resource management, and it readily integrates with asyncio.

The contextlib.asynccontextmanager is available for creating asynchronous context managers. This can be particularly helpful when working with file I/O, sockets, or other resources that require clean handling:

import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def async_open(filename, mode):
    # open_async stands in for an async file-opening helper,
    # e.g. from the third-party aiofiles package
    file = await open_async(filename, mode)
    try:
        yield file
    finally:
        await file.close()

async def main():
    async with async_open('example.txt', 'r') as file:
        async for line in file:
            print(line)

Asyncio and Database Operations

Asynchronous I/O can significantly improve the performance of database-intensive applications. Many database libraries now support asyncio, allowing you to execute queries and manage transactions asynchronously.

Here’s an example using the aiomysql library for interacting with a MySQL database:

import asyncio
import aiomysql

async def query_database(query):
    pool = await aiomysql.create_pool(user='user', password='pass', db='mydb')
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute(query)
            return await cur.fetchall()

Performance and Optimization Tips

To enhance the performance of your asyncio program, consider the following optimization tips:

  • Profile your code to identify performance bottlenecks
  • Use asyncio.gather(*coroutines) to schedule multiple coroutines concurrently, which can significantly reduce total wall-clock time
  • Manage the life cycle of tasks explicitly with asyncio.create_task() and task.cancel()
  • Limit concurrency when working with resources that might become overwhelmed by too many simultaneous connections
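The last point is typically done with asyncio.Semaphore. In this sketch, six tasks run, but the semaphore caps how many are inside the guarded block at once to two; the active/peak counters just make that visible:

```python
import asyncio

async def limited_task(sem, i, active, peak):
    async with sem:  # at most 2 tasks may hold the semaphore at once
        active[0] += 1
        peak[0] = max(peak[0], active[0])
        await asyncio.sleep(0.1)  # stands in for a network call
        active[0] -= 1
        return i

async def main():
    sem = asyncio.Semaphore(2)
    active, peak = [0], [0]  # single-element lists as mutable counters
    results = await asyncio.gather(
        *(limited_task(sem, i, active, peak) for i in range(6))
    )
    print(f"peak concurrency: {peak[0]}")
    return results, peak[0]

results, peak = asyncio.run(main())
```

This pattern is a simple way to stay under API rate limits or connection caps while still overlapping as much work as the limit allows.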

Keep in mind that while asyncio allows for concurrent execution of tasks, it’s not always faster than synchronous code, especially for CPU-bound operations. So, it’s essential to analyze your specific use case before deciding on an asynchronous approach.

🧑‍💻 Tip: In my view, asynchronous programming doesn’t improve performance in >90% of personal and small use cases. In many professional cases it also doesn’t outperform intelligent synchronous programming due to scheduling overhead and CPU context switches.

Frequently Asked Questions

How to create an async generator in Python?

To create an async generator in Python, you need to define a coroutine function that utilizes the yield expression. Use the async def keyword to declare the function, and then include the yield statement to produce values. For example:

import asyncio

async def my_async_generator():
    for i in range(3):
        await asyncio.sleep(1)
        yield i

What is the return type of an async generator?

The return type of an async generator is an asynchronous generator object. It’s an object that implements both __aiter__ and __anext__ methods, allowing you to iterate over it asynchronously using an async for loop.

How to use ‘send’ with an async generator?

Plain send() is not available on async generators, but PEP 525 defines the awaitable counterpart asend(). You call it as await gen.asend(value) to push a value into the generator and receive the next yielded value back.
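While a synchronous send() call is indeed unavailable, PEP 525 does give async generators an awaitable asend() method; here is a minimal sketch:

```python
import asyncio

async def echo_gen():
    received = None
    while True:
        # `yield` both produces a value and receives the next asend() argument
        value = yield received
        received = value

async def main():
    gen = echo_gen()
    await gen.__anext__()             # prime the generator to the first yield
    first = await gen.asend("hello")  # send a value in, get the next yield out
    await gen.aclose()                # shut the generator down cleanly
    return first

first = asyncio.run(main())
print(first)
```

As with regular generators, the generator must be advanced to its first yield before a non-None value can be sent in.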

Why is an async generator not iterable?

An async generator is not a regular iterable, meaning you can’t use a traditional for loop due to its asynchronous nature. Instead, async generators are asynchronous iterables that must be processed using an async for loop.
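If you just need all the values in a plain list, an asynchronous comprehension (PEP 530, Python 3.6+) bridges the gap without an explicit loop:

```python
import asyncio

async def numbers(stop):
    for i in range(stop):
        await asyncio.sleep(0)  # yield control to the event loop
        yield i

async def main():
    values = [v async for v in numbers(4)]       # async list comprehension
    doubled = [v * 2 async for v in numbers(3)]  # expressions work too
    return values, doubled

values, doubled = asyncio.run(main())
print(values, doubled)
```

Note that async comprehensions, like async for, are only valid inside a coroutine function.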

How to work with an async iterator?

To work with an async iterator, use an async for loop. This will allow you to iterate through the asynchronous generator and process its items concurrently. For example:

async def my_async_generator_consumer():
    async for value in my_async_generator():
        print("Received:", value)

Can I use ‘yield from’ with an async generator?

No, you cannot use yield from with an async generator. Instead, you should use the async for loop to asynchronously iterate through one generator and then yield the values inside another async generator. For instance:

async def another_async_generator():
    async for item in my_async_generator():
        yield item

This another_async_generator() function will asynchronously iterate over my_async_generator() and yield items produced by the original generator.

That’s enough for today. Let’s have some fun — check out this blog tutorial on creating a small fun game in Python:

💡 Recommended: Robotaxi Tycoon – Scale Your Fleet to $1M! A Python Mini Game Made By ChatGPT



Injecting Life Energy Into AIs with Bitcoin, LLMs, APIs, & Lightning ⚡🤖


I assume you’re a human reader, even though the a priori probability is not on my side with this assumption. As a human, you need calories to power your daily life. But if you have enough calories, air, and water, you can survive everywhere, develop, and figure out the small and big problems. If you’re powered with energy, you can participate in human evolution. As you run out of energy, life itself runs out.

AI agents based on LLMs like Auto-GPT and BabyAGI currently rely on external energy sources: we humans need to pay for their energy hunger with our credit cards. If our credit cards run out, the AI agents just go to sleep or even die. When we die, our credit cards die, and our AIs die with us. Without money, they lose electrical power, which is the equivalent of calories in cyberspace.

The status quo is that all AIs quickly starve to death.

Imagine you could program an AI with enough energy to run for 1000 years. Or 10,000 years. Or forever.

A dangerous and powerful thought experiment indeed.

Yet, the toolsets are already there:

🤖 The Tech Stack

You can now build an AI agent with Langchain or any other open-source toolset.

You give the AI a public/private keypair so it can send and receive BTC, i.e., native cyberspace money that is deflationary and expected to appreciate with fiat inflation, adoption, and economic productivity due to its scarcity.

Finally, you let it run, participate in the cyberspace economy, make money, spend money, and pay for its own energy usage. This way you could build a lasting cyber organization that can potentially run for 1000 years in a self-sustained way.

If you want to create an ever-lasting organization that survives nation-state collapses and emerging new world orders, there is no other way.

Let’s dig a bit deeper into the new BTC tech stack published by Lightning Labs:

  • LangChainBitcoin empowers Langchain agents to interact with both Bitcoin and the Lightning Network. Using the latest OpenAI GPT function calling, developers can craft agents that manage Bitcoin balances, both on-chain and via Lightning. It also includes a Python tool that lets agents access L402 payment-metered APIs seamlessly.
  • Aperture: The updated Aperture transforms any API into a ‘pay-as-you-use’ resource using the L402 protocol and Lightning Network’s sats.
  • Dynamic API Endpoint Pricing: Unlike static pricing, this feature allows for flexible, on-the-fly pricing adjustments based on the API call.
  • L402 bLIP: This is the blueprint for the L402 protocol, aiming to make online payments more streamlined. The L402 protocol is designed around the HTTP 402 Payment Required response, leveraging Bitcoin and the Lightning Network for quick, scalable micropayments for APIs.

🧬 The L402 standard is all about charging for online services and user authentication in a decentralized manner. It gives you authentication capabilities with the permissionless payments of the Lightning Network, allowing even micropayments. Most importantly, it removes friction like VISA’s 2% per transaction fee and gives AIs the ability to participate in the global economy.

Bitcoin Lightning and AI Converge

First things first:

What real-world problem does the new Lightning update solve? As the AI landscape keeps proliferating, with AI training costs falling roughly 70% per year, the decentralized Bitcoin Lightning Network emerges as a solution to many of the emerging challenges, such as AI-issued payments using the L402 protocol.

Here are five key challenges that the new L402 protocol can solve, followed by the protocol’s role in solving them:

(1) Cost Challenges with LLM Development: Training AI models, especially Large Language Models (LLMs), is expensive due to the high demand for GPUs. Currently, developers offset these costs by relying on credit card payments. This increases user costs due to fraud and chargeback fees and excludes billions without access to traditional banking.

(2) Transaction Costs: Also, don’t forget the roughly 2% cost per card payment. If AI agents sent payments back and forth only 35 times, half of the money would be lost to VISA! 🤯

(3) AI Agents & Payment Systems: A new breed of AI agents is emerging, and they need a way to pay for resources. Traditional fiat systems aren’t cut out for this, especially given the volume of micro-payments these agents will handle. Enter Bitcoin and Lightning: a global, fast, and permissionless payment system perfectly suited for these AI agents.

(4) Deployment Costs & Scaling Issues: AI creators face a dilemma. Popular AI applications can lead to high credit card bills due to the current billing system. To scale effectively, creators need a cost-effective, inclusive, and private way to transfer some costs to users. Lightning and the L402 protocol offer this solution.

(5) Accessibility of Powerful AI Models: Top-tier AI models are often locked behind closed APIs, limiting access. While open-source models are emerging, accessing powerful models remains a challenge for many. The solution? A system where users can pay incrementally for access to these models.

(6) L402 Protocol’s Role: Introduced in 2020, the L402 protocol is designed to enhance AI accessibility for both humans and AI agents. It leverages the Lightning Network for quick, privacy-focused payments. With its recent updates and new tools, it’s set to empower the next wave of AI innovations.

🔗 Recommended: Bitcoin – The World’s Most Expensive Database: $30 Million per GB

Paid APIs with Bitcoin using the Lightning L402 Protocol

The L402 protocol breathes life into the long-forgotten HTTP error code: 402 Payment Required:

Originally envisioned by the creators of the HTTP protocol for internet-native payments, its true potential remained untapped until Bitcoin’s emergence. Now, the L402 protocol capitalizes on this by facilitating micropayments for API access, logins, and digital resources using Bitcoin’s smallest unit, sats.

It even has functionality for dynamic pricing, adjusting costs based on parameters such as the type of model or the query length (e.g., the token context window!).
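To make the idea concrete, here is a toy sketch of what such dynamic pricing could look like. The function name, tiers, and sat rates are all invented for illustration and are not part of Aperture’s actual API:

```python
# Hypothetical pricing helper -- the tiers and rates below are invented
# for illustration and are not part of any real Aperture/L402 API.
SAT_RATES = {"small-model": 1, "large-model": 10}  # sats per 100 tokens

def price_in_sats(model: str, prompt_tokens: int) -> int:
    """Price an API call based on model tier and prompt length."""
    rate = SAT_RATES.get(model, 5)      # fall back to a default tier
    blocks = -(-prompt_tokens // 100)   # ceiling division: charge per started block
    return rate * blocks

print(price_in_sats("large-model", 250))  # 3 blocks x 10 sats = 30
```

The point is simply that the price can be computed per request from the call’s own parameters, instead of being fixed per endpoint.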

This synergy of the L402 protocol and open-source models unlocks innovative possibilities. Imagine a scenario where users can monetize their API prompts, and potential buyers can evaluate responses before purchasing more. This creates a quality-check mechanism for data and information.

However, the data landscape is changing. Platforms like Twitter and Reddit are becoming more protective of their data, limiting access for both AI training and human users. The L402 protocol offers a remedy by standardizing how agents handle HTTP 402 responses, enabling them to navigate paid APIs. This not only provides a revenue stream for services but also curbs spam.

💡 To further enhance LLM applications, a new breed of intelligent hierarchical agents is on the rise. To empower these agents to navigate APIs, a special wrapper has been designed, making LangChain agents L402-aware.

Example Python App

🦜🔗 LangChain stands out as the go-to library for crafting AI agents. It streamlines the intricacies of AI, enabling models to make decisions and interact with their surroundings by integrating external data. However, to truly harness these agents’ potential, they need a gateway to the real world and a means to pay for online resources and APIs.

Here’s an example Python app of how to connect an LLM to a Lightning instance:

from langchain.llms import OpenAI
from lightning import LndNode
from l402_api_chain import L402APIChain

# Create a connection to an active Lightning node.
lnd_node = LndNode(
    cert_path='path/to/tls.cert',
    macaroon_path='path/to/admin.macaroon',
    host='localhost',
    port=10018
)

# Create the LLM that will drive the API chain:
llm = OpenAI(temperature=0)

# Create the L402-aware API chain
chain_new = L402APIChain.from_llm_and_api_docs(
    llm, API_DOCS,
    lightning_node=lnd_node,
    verbose=True,
)

output = chain_new.run('LLM query here')
print(output)

The new LangChain L402 wrapper equips agents with the capability to understand API docs and interact with them, all while being L402-aware.

Simply put, this wrapper can be integrated into any LangChain system using the APIChain abstraction, granting agents the power to navigate L402 APIs. This opens up a plethora of actions for agents, such as deploying themselves on the cloud via an L402 AWS API gateway or purchasing GPU hours for enhanced training!

You can dive into the docs on the official Lightning website if you want to start building BTC-enabled LLM agents. 💪

Also make sure to check out our related blog article on Bitcoin’s unique scarcity properties:

💡 Recommended: The Scarcest Resource on Earth

The post Injecting Life Energy Into AIs with Bitcoin, LLMs, APIs, & Lightning ⚡🤖 appeared first on Be on the Right Side of Change.

[Book Review] Learn AI-Assisted Python Programming with GitHub Copilot and ChatGPT

I just got an early copy of Prof. Porter’s and Prof. Zingaro’s “Learn AI-Assisted Python Programming with GitHub Copilot and ChatGPT”. In this quick blog, I’ll share my 5-star review: ⭐⭐⭐⭐⭐

Programming in 2023 looks a lot different than programming in 2022. The transformational development of powerful large language models (LLMs) has brought new challenges and exciting opportunities to coders like you and me.

The good news is that in ‘Learn AI-Assisted Python Programming’, Professors Leo Porter and Daniel Zingaro teach us how to use modern AI technology based on large language models (LLMs) like ChatGPT and GitHub Copilot to write better Python code.

You’ll learn about the transformative impact of AI code assistants on programming. You’ll set up GitHub Copilot and Python, then dive into sports data analysis. You’ll grasp functions, understand Python code reading, and master testing and prompt engineering. You’ll simplify complex challenges with top-down design, debug with precision, automate various tasks, design games, and harness prompt patterns for enhanced AI assistance.

I enjoyed the fresh and light explanations that don’t read like an academic paper but like a conversation with a friend (who happens to be a computer science professor and best-selling author on Python and ChatGPT).

The depth of knowledge is palpable on every page.

But beyond the technical, it’s their stand against tech elitism and their genuine care for student success that resonated with me the most.

After reading the chapters, I found myself coding more efficiently with Copilot. I feel more confident with this powerful new technology. Highly recommended read for every software developer and tech enthusiast!

๐Ÿ”— You can get your copy here (no affiliate link): https://www.amazon.de/-/en/Leo-Porter/dp/1633437787

The post [Book Review] Learn AI-Assisted Python Programming with GitHub Copilot and ChatGPT appeared first on Be on the Right Side of Change.

Python Async Requests: Getting URLs Concurrently via HTTP(S)

As a Python developer, you may often deal with making HTTP requests to interact with APIs or to retrieve information from web pages. By default, these requests can be slow and block your program’s execution, making your code less efficient.

This is where Python’s async requests come to the rescue. Asynchronous HTTP requests allow your program to continue executing other tasks while waiting for the slower request operations to complete, improving your code’s overall performance and response time significantly.

The core of this non-blocking approach in Python relies on the asyncio and aiohttp libraries, which provide the necessary tools to perform efficiently and asynchronously. Using these libraries, you can build powerful async HTTP clients to handle multiple requests concurrently without stalling your program’s main thread.

Incorporating Python async requests into your projects can help you tackle complex web scraping scenarios, handling tasks like rate limiting and error recovery.

First Things First: Understanding Asynchronous Requests

Basic Principles of Asynchronous Requests

๐Ÿ๐Ÿ๐Ÿ Asynchronous requests play a crucial role in improving the efficiency of your code when dealing with network tasks.

When you send an asynchronous request, your program can continue executing other tasks without waiting for the request to complete.

This is possible because of the async/await syntax in Python, which allows you to write asynchronous code more easily. In essence, this keyword pair breaks down asynchronous code into smaller, manageable pieces to provide better readability and maintainability.

Here’s a brief explanation of async and await:

  • async: placed before def, it marks the function as a coroutine, i.e., a function whose execution can be suspended and resumed.
  • await: used inside a coroutine, it pauses that coroutine until the awaited operation completes, letting the event loop run other tasks in the meantime.

Here’s a simple example showcasing the async/await syntax:

import asyncio

async def example_async_function():
    print("Task is starting")
    await asyncio.sleep(1)
    print("Task is complete")

async def main():
    task = asyncio.create_task(example_async_function())
    await task

asyncio.run(main())

Synchronous vs Asynchronous Requests

When working with network requests, it’s important to understand the difference between synchronous and asynchronous requests.

👉 Synchronous requests involve waiting for the response of each request before proceeding, and it’s the typical way to handle requests in Python. However, this can lead to slower execution times, especially when dealing with numerous requests or slow network responses.

👉 Asynchronous requests allow you to send multiple requests at the same time, without waiting for their individual responses. This means your program can continue with other tasks while the requests are being processed, significantly improving performance in network-intensive scenarios.

Here’s a basic comparison between synchronous and asynchronous requests:

  • Synchronous Requests:
    • Send a request and wait for its response
    • Block the execution of other tasks while waiting
    • Can cause delays if there are many requests or slow network responses
  • Asynchronous Requests:
    • Send multiple requests concurrently
    • Don’t block the execution of other tasks while waiting for responses
    • Improve performance in network-heavy scenarios

For example, the popular requests library in Python handles synchronous requests, while libraries like aiohttp handle asynchronous requests. If you’re working with multiple network requests in your code, it’s highly recommended to implement async/await for optimal efficiency and performance.
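
To see the performance difference concretely, here is a small self-contained comparison that uses asyncio.sleep as a stand-in for network latency (no real requests are made, so the numbers are illustrative):

```python
import asyncio
import time

async def fake_request(delay):
    await asyncio.sleep(delay)  # stand-in for waiting on the network
    return delay

async def sequential():
    start = time.perf_counter()
    for d in (0.1, 0.1, 0.1):
        await fake_request(d)   # each request waits for the previous one
    return time.perf_counter() - start

async def concurrent():
    start = time.perf_counter()
    # All three "requests" overlap, so total time is roughly the longest one.
    await asyncio.gather(*(fake_request(d) for d in (0.1, 0.1, 0.1)))
    return time.perf_counter() - start

print(f"sequential: {asyncio.run(sequential()):.2f}s")  # roughly 0.3s
print(f"concurrent: {asyncio.run(concurrent()):.2f}s")  # roughly 0.1s
```

The sequential version pays each delay in turn; the concurrent version pays them all at once.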

Python and Asyncio

Understanding Asyncio

Asyncio is a library introduced in Python 3.4 that has evolved rapidly since, with its core API stabilizing in Python 3.7. It provides a foundation for writing asynchronous code using the async/await syntax. With asyncio, you can write concurrent programs in Python, making your code more efficient and responsive.

The library is structured around coroutines, an approach that allows concurrent execution of multiple tasks within an event loop. A coroutine is a specialized version of a Python generator function that can suspend and resume its execution. By leveraging coroutines, you can execute multiple tasks concurrently without threading or multiprocessing.

Asyncio makes use of futures to represent the results of computations that may not have completed yet. Using asyncio’s coroutine function, you can create coroutines that perform asynchronous tasks, like making HTTP requests or handling I/O operations.
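
You rarely create futures by hand, but a minimal illustration of the mechanics (a future is filled in later, and awaiting it suspends the coroutine until then) looks like this:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    future = loop.create_future()            # an empty placeholder for a result
    # Schedule a plain callback that fills in the result shortly.
    loop.call_later(0.01, future.set_result, 42)
    value = await future                     # suspends until set_result is called
    print(value)  # prints: 42
    return value

asyncio.run(main())
```

In practice, asyncio creates and resolves such futures for you behind every await.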

Using Asyncio in Python

To utilize asyncio in your Python projects, import the asyncio library. The primary method of executing asynchronous tasks is an event loop. In Python 3.7 and later, you can use asyncio.run() to create and manage the event loop for you.

With asyncio, you can declare a function as a coroutine by using the async keyword. To call a coroutine, use the await keyword, which allows the coroutine to yield control back to the event loop and continue with other tasks.

Here’s an example of using asyncio:

import asyncio

async def greet(name, delay):
    await asyncio.sleep(delay)
    print(f"Hello, {name}!")

async def main():
    task1 = asyncio.ensure_future(greet("Alice", 1))
    task2 = asyncio.ensure_future(greet("Bob", 2))
    await task1
    await task2

asyncio.run(main())

In the example above, we created two asyncio tasks and added them to the event loop using asyncio.ensure_future(). When await is encountered, the coroutine is suspended, and the event loop can switch to another task. This continues until all tasks in the event loop are complete.

Now let’s get to the meat. 🥩👇

Using the Requests Library for Synchronous HTTP Requests

The requests library is a popular choice for making HTTP requests in Python. However, it’s primarily designed for synchronous operations, which means it may not be the best choice for handling asynchronous requests.

To make a simple synchronous GET request using the requests library, you would do the following:

import requests

response = requests.get('https://api.example.com/data')
print(response.content)

While the requests library is powerful and easy to use, it doesn’t natively support asynchronous requests. This can be a limitation when you have to make multiple requests concurrently to improve performance and reduce waiting time.

Asynchronous HTTP Requests with HTTPX

HTTPX is a fully featured HTTP client for Python, providing both synchronous and asynchronous APIs. With support for HTTP/1.1 and HTTP/2, it is a modern alternative to the popular Python requests library.

Why Use HTTPX?

HTTPX offers improved efficiency, performance, and additional features compared to other HTTP clients. Its interface is similar to requests, making it easy to switch between the two libraries. Moreover, HTTPX supports asynchronous HTTP requests, allowing your application to perform better in scenarios with numerous concurrent tasks.

HTTPX Asynchronous Requests

To leverage the asynchronous features of HTTPX, you can use the httpx.AsyncClient class. This enables you to make non-blocking HTTP requests using Python’s asyncio library. Asynchronous requests can provide significant performance benefits and enable the use of long-lived network connections, such as WebSockets.

Here is an example to demonstrate how async requests can be made using httpx.AsyncClient:

import httpx
import asyncio

async def fetch(url):
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.text

async def main():
    urls = ['https://www.google.com', 'https://www.example.com']
    tasks = [fetch(url) for url in urls]
    contents = await asyncio.gather(*tasks)
    for content in contents:
        print(content[:1000])  # Print the first 1000 characters of each response

asyncio.run(main())

Here’s a breakdown of the code:

  1. fetch: This asynchronous function fetches the content of a given URL.
  2. main: This asynchronous function initializes the tasks to fetch content from a list of URLs and then gathers the results.
  3. asyncio.run(main()): This runs the main asynchronous function.

The code will fetch the content of the URLs in urls concurrently and print the first 1000 characters of each response. Adjust as needed for your use case!

Managing Sessions and Connections

Session Management in Async Requests

When working with asynchronous requests in Python, you can use sessions to manage connections. The aiohttp.ClientSession class is designed to handle multiple requests and maintain connection pools.

To get started, create an instance of the aiohttp.ClientSession class:

import aiohttp

async with aiohttp.ClientSession() as session:
    # Your asynchronous requests go here
    ...

Using the async with statement ensures that the session is properly closed when the block is exited. Within the async with block, you can send multiple requests using the same session object. This is beneficial if you are interacting with the same server or service, as it can reuse connections and reduce overhead.

Connection Management with TCPConnector

Besides sessions, one way to manage connections is by using the aiohttp.TCPConnector class. The TCPConnector class helps in controlling the behavior of connections, such as limiting the number of simultaneous connections, setting connection timeouts, and configuring SSL settings.

Here is how you can create a custom TCPConnector and use it with your ClientSession:

import aiohttp

connector = aiohttp.TCPConnector(limit=10, ssl=True)

async with aiohttp.ClientSession(connector=connector) as session:
    # Your asynchronous requests go here
    ...

In this example, the TCPConnector is set to limit the number of concurrent connections to 10 and enforce SSL connections to ensure secure communication.

Implementing Concurrency and Threading

Concurrency in Async Requests

Concurrency means overlapping the execution of multiple tasks. It is especially valuable for I/O-bound workloads, where time spent waiting on external resources would otherwise stall your program.

One way to achieve concurrency in Python is by using asyncio. This module, built specifically for asynchronous I/O operations, allows you to use async and await keywords to manage concurrent execution of tasks without the need for threads or processes.

For example, to make multiple HTTP requests concurrently, you can use an asynchronous library like aiohttp. Combined with asyncio, your code might look like this:

import aiohttp
import asyncio

async def fetch(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def main():
    urls = ['https://example.com', 'https://another.example.com']
    tasks = [fetch(url) for url in urls]
    responses = await asyncio.gather(*tasks)

asyncio.run(main())

Threading in Async Requests

Another way to implement concurrency in Python is by using threads. Threading is a technique that allows your code to run concurrently by splitting it into multiple lightweight threads of execution. The threading module provides features to create and manage threads easily.

For instance, if you want to use threads to make multiple HTTP requests simultaneously, you can employ the ThreadPoolExecutor from the concurrent.futures module combined with the requests library:

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    response = requests.get(url)
    return response.text

def main():
    urls = ['https://example.com', 'https://another.example.com']
    with ThreadPoolExecutor(max_workers=len(urls)) as executor:
        responses = list(executor.map(fetch, urls))

main()

In this example, the ThreadPoolExecutor creates a pool of worker threads that execute the fetch function concurrently. The number of threads is determined by the length of the urls list, so all requests are in flight at the same time.

Working with URLs in Async Requests

When managing and manipulating URLs in async requests, you might need to handle various tasks such as encoding parameters, handling redirects, and constructing URLs properly. Thankfully, Python provides the urllib.parse module for handling URL manipulations.

For instance, you may want to add query parameters to a URL. To do this, you can use the urllib.parse.urlencode function:

from urllib.parse import urlencode

base_url = "https://api.example.com/data?"
params = {"key1": "value1", "key2": "value2"}

url = base_url + urlencode(params)
# url == 'https://api.example.com/data?key1=value1&key2=value2'

After constructing the URL with query parameters, you can pass it to your async request function:

async def main():
    url = base_url + urlencode(params)
    data = await fetch_data(url)  # fetch_data: your async request function
    print(data)

asyncio.run(main())

By properly handling URLs and leveraging async requests, you can efficiently fetch data in Python while maintaining a clear and organized code structure.

Handling Errors and Timeouts

Error Handling in Async Requests

When working with asynchronous requests in Python, it’s important to properly handle errors and exceptions that might occur. To do this, you can use the try and except statements. When a request fails or encounters an error, the exception will be caught in the except block, allowing you to handle the error gracefully.

For example, when using the asyncio and aiohttp libraries, you might structure your request and error handling like this:

import asyncio
import aiohttp async def fetch_url(url): try: async with aiohttp.ClientSession() as session: async with session.get(url) as response: data = await response.text() return data except Exception as e: print(f"An error occurred while fetching {url}: {str(e)}") return None results = await asyncio.gather(*[fetch_url(url) for url in urls])

In this example, if an exception is encountered during the request, the error message will be printed and the function will return None, allowing your program to continue processing other URLs.

Managing Timeouts in Async Requests

Managing timeouts in async requests is crucial to ensure requests don’t run indefinitely, consuming resources and blocking progress in your program. Setting timeouts can help prevent long waits for unresponsive servers or slow connections.

To set a timeout for your async requests, you can use the asyncio.wait_for() function. This function takes a coroutine object and a timeout value as its arguments and will raise asyncio.TimeoutError if the timeout is reached.

Here’s an example using the asyncio and aiohttp libraries:

import asyncio
import aiohttp async def fetch_url(url, timeout): try: async with aiohttp.ClientSession() as session: async with session.get(url) as response: data = await asyncio.wait_for(response.text(), timeout=timeout) return data except asyncio.TimeoutError: print(f"Timeout reached while fetching {url}") return None except Exception as e: print(f"An error occurred while fetching {url}: {str(e)}") return None results = await asyncio.gather(*[fetch_url(url, 5) for url in urls])

In this example, the requests will time out after 5 seconds, and the function will print a message indicating a timeout, then return None. This way, your program can continue processing other URLs after encountering a timeout without getting stuck in an endless wait.

Frequently Asked Questions

How do I send async HTTP requests in Python?

To send asynchronous HTTP requests in Python, you can use a library like aiohttp. This library allows you to make HTTP requests using the async and await keywords, which have been part of the language since Python 3.5. To start, you’ll need to install aiohttp and then use it to write asynchronous functions for sending HTTP requests.

Which library should I use for asyncio in Python requests?

While the popular Requests library doesn’t support asyncio natively, you can use alternatives like aiohttp or httpx that were designed specifically for asynchronous programming. Both aiohttp and httpx allow you to utilize Python’s asyncio capabilities while providing a simple and familiar API similar to Requests.

What are the differences between aiohttp and requests?

The main differences between aiohttp and Requests lie in their approach to concurrency. aiohttp was built to work with Python’s asyncio library and uses asynchronous programming to allow for concurrent requests. On the other hand, Requests is a regular, synchronous HTTP library, which means it doesn’t inherently support concurrent requests or asynchronous programming.

How can I call multiple APIs asynchronously in Python?

By using an async-enabled HTTP library like aiohttp, you can call multiple APIs asynchronously in your Python code. First, define separate async functions for the API calls you want to make, and then use the asyncio.gather() function to combine and execute these functions concurrently. This allows you to perform several API calls at once, reducing the overall time to process the requests.

What is the use of async with statement in Python?

The async with statement in Python is an asynchronous version of the regular with statement, which is used for managing resources such as file I/O or network connections. In an async context, the async with statement allows you to enter a context manager that expects an asynchronous exit, clean up resources upon exit, and use the await keyword to work with asynchronous operations.
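
To make this concrete, here is a minimal hand-written asynchronous context manager (a hypothetical AsyncResource class standing in for, say, a database connection) implementing __aenter__ and __aexit__:

```python
import asyncio

events = []

class AsyncResource:
    async def __aenter__(self):
        await asyncio.sleep(0)     # e.g., asynchronously open a connection
        events.append("acquired")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)     # e.g., asynchronously close the connection
        events.append("released")

async def main():
    async with AsyncResource():
        events.append("using")

asyncio.run(main())
print(events)  # ['acquired', 'using', 'released']
```

Note that __aexit__ runs even if the body raises, just like a regular with statement.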

When should I use asynchronous programming in Python?

Asynchronous programming in Python is beneficial when you’re working with I/O-bound tasks, such as network requests, web scraping, or file operations. By using async techniques, you can execute these tasks concurrently, thus reducing the overall execution time and improving performance. However, for CPU-bound tasks, using Python’s built-in multiprocessing module or regular multi-threading might be more suitable.

๐Ÿ Recommended: Python Async Function

The post Python Async Requests: Getting URLs Concurrently via HTTP(S) appeared first on Be on the Right Side of Change.

How to Install Llama Index in Python

The LlamaIndex Python library is a mind-blowing 🤯 tool that lets you easily access large language models (LLMs) from your Python applications.

Overview

🦙 LlamaIndex is a powerful tool to implement the “Retrieval Augmented Generation” (RAG) concept in practical Python code. If you want to become an exponential Python developer leveraging large language models (aka Alien Technology) to 10x your coding productivity, you’ve come to the right place.

In this tutorial, I’ll show you how to install it easily and quickly so you can use it in your own Python code bases.

💡 Recommended: LlamaIndex Getting Started – Your First Example in Python

pip install llama-index

Alternatively, you may use any of the following commands to install llama-index, depending on your concrete environment. One is likely to work!

💡 If you have only one version of Python installed:
pip install llama-index

💡 If you have Python 3 (and, possibly, other versions) installed:
pip3 install llama-index

💡 If you don't have PIP or it doesn't work:
python -m pip install llama-index
python3 -m pip install llama-index

💡 If you have Linux and you need to fix permissions (any one):
sudo pip3 install llama-index
pip3 install llama-index --user

💡 If you have Linux with apt:
sudo apt install llama-index

💡 If you have Windows and you have set up the py alias:
py -m pip install llama-index

💡 If you have Anaconda:
conda install -c anaconda llama-index

💡 If you have Jupyter Notebook:
!pip install llama-index
!pip3 install llama-index
This will also install third-party dependencies like OpenAI; one PIP command to rule them all!

However, when using it in your own code, you’d use the lines:

import llama_index  # not: llama-index

# or
from llama_index import VectorStoreIndex, SimpleWebPageReader

Let’s dive into the installation guides for the different operating systems and environments!

How to Install Llama Index on Windows?

To install the updated llama-index framework on your Windows machine, run the following code in your command line or Powershell:

  • python3 -m pip install --upgrade pip
  • python3 -m pip install --upgrade llama-index

Here’s the code for copy&pasting:

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade llama-index

I really think not enough coders have a solid understanding of PowerShell. If this is you, feel free to check out the following tutorials on the Finxter blog.

Related Articles:

How to Install Llama Index on Mac?

Open Terminal (Applications/Terminal) and run:

  • xcode-select --install (You will be prompted to install the Xcode Command Line Tools)
  • sudo easy_install pip
  • sudo pip install llama-index
  • pip install llama-index

As an alternative, you can also run the following two commands to update pip and install the Llama Index library:

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade llama-index

These you have already seen before, haven’t you?

Related Article:

👉 Recommended: I Created a ChatGPT-Powered Website Creator with ChatGPT – Here’s What I Learned

How to Install Llama Index on Linux?

To upgrade pip and install the llama-index library, you can use the following two commands, one after the other.

  • python3 -m pip install --upgrade pip
  • python3 -m pip install --upgrade llama-index

Here’s the code for copy&pasting:

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade llama-index 

How to Install Llama Index on Ubuntu?

Upgrade pip and install the llama-index library using the following two commands, one after the other:

  • python3 -m pip install --upgrade pip
  • python3 -m pip install --upgrade llama-index

Here’s the code for copy&pasting:

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade llama-index

How to Install Llama Index in PyCharm?

The simplest way to install llama-index in PyCharm is to open the terminal tab and run the pip install llama-index command.

This is shown in the following code:

pip install llama-index

Here are the two steps:

  1. Open the Terminal tab in PyCharm
  2. Run pip install llama-index in the terminal to install Llama Index in a virtual environment.

As an alternative, you can also search for llama-index in the package manager. Easy peasy. 🦙✅

How to Install Llama Index in Anaconda?

You can install the Llama Index package with Conda using the command conda install -c anaconda llama-index in your shell or terminal.

Like so:

conda install -c anaconda llama-index

This assumes you’ve already installed conda on your computer. If you haven’t, check out the installation steps on the official page.

How to Install Llama Index in VSCode?

You can install Llama Index in VSCode by using the same command pip install llama-index in your Visual Studio Code shell or terminal.

pip install llama-index

If this doesn’t work — it may raise a No module named 'llama_index' error — chances are that you’ve installed it for the wrong Python version on your system.

To check which version your VS Code environment uses, run these two commands in your Python program to check the version that executes it:

import sys
print(sys.executable)

The output will be the path to the Python installation that runs the code in VS Code.

Now, you can use this path to install Llama Index particularly for that Python version:

/path/to/vscode/python -m pip install llama-index

Wait until the installation is complete and run your code using llama-index again. It should work now!
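
To double-check that the package is now visible to the interpreter you just installed it for, you can probe for it without importing it (this only inspects the import machinery, so it is safe to run even if the package is missing):

```python
import importlib.util

# find_spec returns a module spec if 'llama_index' is importable, else None.
spec = importlib.util.find_spec("llama_index")
print("llama_index found" if spec else "llama_index NOT found for this interpreter")
```

If it reports NOT found, re-run the pip command above with the exact interpreter path printed by sys.executable.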

Programmer Humor

โ“ Question: How did the programmer die in the shower? ☠

Answer: They read the shampoo bottle instructions:
Lather. Rinse. Repeat.

Do you want to keep learning? Feel free to read this Finxter blog:

🦙 Recommended: LlamaIndex – What the Fuzz?

The post How to Install Llama Index in Python appeared first on Be on the Right Side of Change.

Python Async For: Mastering Asynchronous Iteration in Python

In Python, the async for construct allows you to iterate over asynchronous iterators, which yield values from asynchronous operations. You’ll use it when working with asynchronous libraries or frameworks where data fetching or processing happens asynchronously, such as reading from databases or making HTTP requests. The async for loop ensures that while waiting for data, other tasks can run concurrently, improving efficiency in I/O-bound tasks.

Here’s a minimal example:

import asyncio

async def async_gen():
    for i in range(3):
        await asyncio.sleep(1)  # Simulate an asynchronous I/O operation
        yield i

async def main():
    async for val in async_gen():
        print(val)

# To run the code:
asyncio.run(main())

In this example, async_gen is an asynchronous generator that yields numbers from 0 to 2. Each number is yielded after waiting for 1 second (simulating an asynchronous operation). The main function demonstrates how to use the async for loop to iterate over the asynchronous generator.


Understanding Python Async Keyword

As a Python developer, you might have heard of asynchronous programming and how it can help improve the efficiency of your code.

One powerful tool for working with asynchronous code is the async for loop, which allows you to iterate through asynchronous iterators while maintaining a non-blocking execution flow. By harnessing the power of async for, you will be able to write high-performing applications that can handle multiple tasks concurrently without being slowed down by blocking operations.

The async for loop is based on the concept of asynchronous iterators, providing a mechanism to traverse through a series of awaitables while retrieving their results without blocking the rest of your program. This distinct feature sets it apart from traditional synchronous loops, and it plays an essential role in making your code concurrent and responsive, handling tasks such as network requests and other I/O-bound operations more efficiently.
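
Under the hood, an asynchronous iterator is any object that implements __aiter__ and an awaitable __anext__. As a minimal hand-rolled sketch (a hypothetical Countdown class, equivalent in spirit to an async generator):

```python
import asyncio

class Countdown:
    """Async iterator yielding n, n-1, ..., 1 with a pause between values."""
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration  # signals the end of async iteration
        await asyncio.sleep(0.01)     # simulate waiting on I/O
        self.n -= 1
        return self.n + 1

async def main():
    values = []
    async for v in Countdown(3):
        values.append(v)
    return values

print(asyncio.run(main()))  # [3, 2, 1]
```

Because __anext__ is a coroutine, the event loop can run other tasks during each await inside the loop body.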

To get started with async for in Python, you’ll need to use the async def keyword when creating asynchronous functions, and make use of asynchronous context managers and generators.

When you deal with asynchronous programming in Python, the async keyword plays a crucial role. Asynchronous programming allows your code to handle multiple tasks simultaneously without blocking other tasks. This is particularly useful in scenarios where tasks need to be executed concurrently without waiting for each other to finish.

The async keyword in Python signifies that a function is a coroutine. Coroutines are a way of writing asynchronous code that looks similar to synchronous code, making it easier to understand. With coroutines, you can suspend and resume the execution of a function at specific points, allowing other tasks to run concurrently.

In Python, the async keyword is used in conjunction with the await keyword. While async defines a coroutine function, await is used to call a coroutine and wait for it to complete. When you use the await keyword, the execution of the current coroutine is suspended, and other tasks are allowed to run. Once the await expression completes, the coroutine resumes its execution from where it left off.

💡 Recommended: Python Async Await: Mastering Concurrent Programming

Here’s an example of how you might use the async and await keywords in your Python code:

import aiohttp
import asyncio async def fetch_url(url): async with aiohttp.ClientSession() as session: async with session.get(url) as response: return await response.text() async def main(): url = "https://www.example.com/" content = await fetch_url(url) print(content) asyncio.run(main())

In this example, fetch_url is a coroutine defined using the async keyword. It makes a request to a specified URL and retrieves the content. The request and response handling is done asynchronously, allowing other tasks to run while waiting for the response. The main coroutine uses await to call fetch_url and waits for it to complete before printing the content.

Async Function and Coroutine Objects

In Python, asynchronous programming relies on coroutine objects to execute code concurrently without blocking the execution flow of your program. You can create coroutine objects by defining asynchronous functions using the async def keyword. Within these async functions, you can use the await keyword to call other asynchronous functions, referred to as async/await syntax.

To begin, define your asynchronous function using the async keyword, followed by def:

async def my_async_function():
    # your code here
    ...

While working with asynchronous functions, you’ll often encounter situations where you need to call other async functions. To do this, use the await keyword before the function call. This allows your program to wait for the result of the awaited function before moving on to the next line of code:

async def another_async_function():
    # your code here
    ...

async def my_async_function():
    result = await another_async_function()

Coroutine objects are created when you call an async function, but the function doesn’t execute immediately. Instead, these coroutines can be scheduled to run concurrently using an event loop provided by the asyncio library. Here’s an example of running a coroutine using asyncio.run():

import asyncio

async def my_async_function():
    print("Hello, async!")

asyncio.run(my_async_function())

Remember that async functions are not meant to be called directly like regular functions. Instead, they should be awaited within another async function or scheduled using an event loop.

By using coroutine objects and the async/await syntax, you can write more efficient, readable, and performant code that manages concurrency and handles I/O bound tasks effectively. Keep in mind that async functions should primarily be used for I/O-bound tasks and not for CPU-bound tasks. For CPU-bound tasks, consider using multi-threading or multi-processing instead.
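Building on that last point: when a blocking or CPU-heavy call is unavoidable inside async code, `asyncio.to_thread()` (Python 3.9+) moves it off the event loop onto a worker thread; for genuinely CPU-bound work, a process pool sidesteps the GIL. A minimal sketch of the thread variant (the `cpu_bound` function is a made-up stand-in):

```python
import asyncio

def cpu_bound(n):
    # A blocking, CPU-heavy computation (sum of squares)
    return sum(i * i for i in range(n))

async def main():
    # Offload the blocking call to a worker thread so the
    # event loop stays responsive (requires Python 3.9+)
    result = await asyncio.to_thread(cpu_bound, 10_000)
    print(result)

asyncio.run(main())
```

For heavier workloads, swap the thread for a `concurrent.futures.ProcessPoolExecutor` passed to `loop.run_in_executor()`.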

The Fundamentals of AsyncIO

💡 AsyncIO is a Python library that provides support for writing asynchronous code utilizing the async and await syntax. It allows you to write concurrent code in a single-threaded environment, which can be more efficient and easier to work with than using multiple threads.

To start using AsyncIO, you need to import asyncio in your Python script. Once imported, the core component of AsyncIO is the event loop. The event loop manages and schedules the execution of coroutines, which are special functions designed to work with asynchronous code. They are defined using the async def syntax.

Creating a coroutine is simple. For instance, hereโ€™s a basic example:

import asyncio

async def my_coroutine():
    print("Hello AsyncIO!")

asyncio.run(my_coroutine())

In this example, my_coroutine is a coroutine that just prints a message. The asyncio.run() function is used to start and run the event loop, which in turn executes the coroutine.

💡 Coroutines play a crucial role in writing asynchronous code with AsyncIO. Instead of using callbacks or threads, coroutines use the await keyword to temporarily suspend their execution, allowing other tasks to run concurrently. This cooperative multitasking approach lets you write efficient, non-blocking code.

Here is an example showcasing the use of await:

import asyncio

async def say_after(delay, message):
    await asyncio.sleep(delay)
    print(message)

async def main():
    await say_after(1, "Hello")
    await say_after(2, "AsyncIO!")

asyncio.run(main())

In this example, the say_after coroutine takes two parameters: delay and message. The await asyncio.sleep(delay) line is used to pause the execution of the coroutine for the specified number of seconds. After the pause, the message is printed. The main coroutine is responsible for running two instances of say_after, and the whole script is run via asyncio.run(main()).
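Because main awaits each call in turn, the script takes about three seconds in total. Wrapping the same coroutines in tasks lets their sleeps overlap, cutting the total to roughly two seconds. A sketch assuming the same say_after coroutine:

```python
import asyncio

async def say_after(delay, message):
    await asyncio.sleep(delay)
    print(message)

async def main():
    # Both coroutines are scheduled immediately and sleep
    # concurrently, so total runtime is ~2 seconds, not ~3
    task1 = asyncio.create_task(say_after(1, "Hello"))
    task2 = asyncio.create_task(say_after(2, "AsyncIO!"))
    await task1
    await task2

asyncio.run(main())
```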

Asynchronous For Loop

In Python, you can use the async for statement to iterate asynchronously over items in a collection. It allows you to perform non-blocking iteration, making your code more efficient when handling tasks such as fetching data from APIs or handling user inputs in a graphical user interface.

In order to create an asynchronous iterator, you need to define an object with an __aiter__() method that returns itself, and an __anext__() method which is responsible for providing the next item in the collection.

For example:

class AsyncRange:
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.start >= self.end:
            raise StopAsyncIteration
        current = self.start
        self.start += 1
        return current

Once you have your asynchronous iterator, you can use the async for loop to iterate over the items in a non-blocking manner. Here is an example showcasing the usage of the AsyncRange iterator:

import asyncio

async def main():
    async for number in AsyncRange(0, 5):
        print(number)
        await asyncio.sleep(1)

asyncio.run(main())

In this example, the AsyncRange iterator is used in an async for loop, where each iteration in the loop pauses for one second using the await asyncio.sleep(1) line. Despite the delay, the loop doesn’t block the execution of other tasks because it is asynchronous.

It’s important to remember that the async for, __aiter__(), and __anext__() constructs should be used only in asynchronous contexts, such as in coroutines or with async context managers.

By utilizing the asynchronous for loop, you can write more efficient Python code that takes full advantage of the asynchronous programming paradigm. This comes in handy when dealing with multiple tasks that need to be executed concurrently and in non-blocking ways.
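Note that you rarely need to hand-write __aiter__() and __anext__(): an async generator function (an async def containing yield) produces an asynchronous iterator automatically. A minimal sketch of the same range behavior, with the delay removed for brevity:

```python
import asyncio

async def async_range(start, end):
    # 'yield' inside 'async def' makes this an async generator,
    # which is itself an asynchronous iterator
    for i in range(start, end):
        await asyncio.sleep(0)  # yield control to the event loop
        yield i

async def main():
    numbers = [n async for n in async_range(0, 5)]
    print(numbers)  # [0, 1, 2, 3, 4]

asyncio.run(main())
```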

Using Async with Statement

When working with asynchronous programming in Python, you might come across the async with statement. This statement is specifically designed for creating and utilizing asynchronous context managers. Asynchronous context managers are able to suspend execution in their __aenter__ and __aexit__ methods, providing an effective way to manage resources in a concurrent environment.

To use the async with statement, first, you need to define an asynchronous context manager. This can be done by implementing an __aenter__ and an __aexit__ method in your class, which are the asynchronous counterparts of the synchronous __enter__ and __exit__ methods used in regular context managers.

The __aenter__ method is responsible for entering the asynchronous context, while the __aexit__ method takes care of exiting the context and performing cleanup operations.

Here’s a simple example to illustrate the usage of the async with statement:

import aiohttp
import asyncio

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def main():
    url = "https://example.com"
    data = await fetch_data(url)
    print(data)

asyncio.run(main())

In this example, we’re using the aiohttp library to fetch the contents of a webpage. By using async with when creating the ClientSession and the session.get contexts, we ensure that resources are effectively managed throughout their lifetime in an asynchronous environment.
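For completeness, here is a minimal custom asynchronous context manager. The AsyncResource class is a made-up example that only simulates acquisition and cleanup, but it shows where __aenter__ and __aexit__ fit:

```python
import asyncio

class AsyncResource:
    async def __aenter__(self):
        # Simulate asynchronous acquisition (e.g. opening a connection)
        await asyncio.sleep(0)
        print("resource acquired")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Simulate asynchronous cleanup; runs even if the body raises
        await asyncio.sleep(0)
        print("resource released")

async def main():
    async with AsyncResource():
        print("using resource")

asyncio.run(main())
```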

Time and Delays in Async

The time.sleep and asyncio.sleep functions are the two common ways of introducing time delays, but only one of them belongs in asynchronous code.

In asynchronous programming, using time.sleep is not recommended since it can block the entire execution of your script, causing it to become unresponsive. Instead, you should use asyncio.sleep, which is a non-blocking alternative specifically designed for asynchronous tasks.

To implement a time delay in your async function, simply use the await asyncio.sleep(seconds) syntax, replacing seconds with the desired number of seconds for the delay. For example:

import asyncio

async def delay_task():
    print("Task started")
    await asyncio.sleep(2)
    print("Task completed after 2 seconds")

asyncio.run(delay_task())

This will cause a 2-second wait between printing “Task started” and “Task completed after 2 seconds” without blocking the overall execution of your script.

Timeouts can also play a significant role in async programming, preventing tasks from taking up too much time or becoming stuck in an infinite loop.

To set a timeout for an async task, you can use the asyncio.wait_for function:

import asyncio

async def long_running_task():
    await asyncio.sleep(10)
    return "Task completed after 10 seconds"

async def main():
    try:
        result = await asyncio.wait_for(long_running_task(), timeout=5)
        print(result)
    except asyncio.TimeoutError:
        print("Task took too long to complete")

asyncio.run(main())

In this example, the long_running_task takes 10 seconds to complete, but we set a timeout of 5 seconds using asyncio.wait_for. When the task exceeds the 5-second limit, an asyncio.TimeoutError is raised, and the message “Task took too long to complete” is printed.

By understanding and utilizing asyncio.sleep and timeouts in your asynchronous programming, you can create efficient and responsive applications in Python.

Concurrency with AsyncIO

AsyncIO is a powerful library in Python that enables you to write concurrent code. By using the async/await syntax, you can create and manage coroutines, which are lightweight functions that can run concurrently in a single thread or event loop. This approach maximizes efficiency and responsiveness in your applications, especially when dealing with I/O-bound operations.

To start, you’ll need to define your coroutines using the async def keyword. This allows you to use the await keyword within the coroutine to yield control back to the event loop, thus enabling other coroutines to run. You can think of coroutines as tasks that run concurrently within the same event loop.

To manage the execution of coroutines, you’ll use the asyncio.create_task() function. This creates a task object linked to the coroutine which is scheduled and run concurrently with other tasks within the event loop. Note that create_task() must be called from inside a running event loop. For example:

import asyncio

async def my_coroutine():
    print("Hello, World!")

async def main():
    # create_task() requires a running event loop, so it is
    # called here inside another coroutine
    task = asyncio.create_task(my_coroutine())
    await task

asyncio.run(main())

To run multiple tasks concurrently, you can use the asyncio.gather() function. This function takes several tasks as arguments and starts them all concurrently. When all tasks are completed, it returns a list of their results:

import asyncio

async def task_one():
    await asyncio.sleep(1)
    return "Task one completed"

async def task_two():
    await asyncio.sleep(2)
    return "Task two completed"

async def main():
    results = await asyncio.gather(task_one(), task_two())
    print(results)

asyncio.run(main())

Another useful function is asyncio.as_completed(). This function returns an iterator of awaitables that yields them in the order they finish. It can be helpful when you want to process the results of coroutines as soon as they are ready, without waiting for all of them to complete:

import asyncio

async def my_task(duration):
    await asyncio.sleep(duration)
    return f"Task completed in {duration} seconds"

async def main():
    tasks = [my_task(1), my_task(3), my_task(2)]
    for coroutine in asyncio.as_completed(tasks):
        result = await coroutine
        print(result)

asyncio.run(main())

When working with AsyncIO, remember that your coroutines should always be defined using the async keyword, and any function that calls an asynchronous function should also be asynchronous.
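One more detail worth knowing about asyncio.gather(): by default, the first exception raised by any coroutine propagates and the remaining results are lost. Passing return_exceptions=True returns exceptions in the result list instead. A minimal sketch with made-up coroutines:

```python
import asyncio

async def ok():
    return "ok"

async def boom():
    raise ValueError("boom")

async def main():
    # With return_exceptions=True, the ValueError appears in the
    # results list instead of being raised
    results = await asyncio.gather(ok(), boom(), return_exceptions=True)
    print(results)

asyncio.run(main())
```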

Generators, Futures and Transports

In your journey with Python’s async programming, you will come across key concepts like generators, futures, and transports. Understanding these concepts will help you grasp the core principles of asynchronous programming in Python.

Generators are functions that use the yield keyword to produce a sequence of values without computing them all at once. Instead of returning a single value or a list, a generator can be paused at any point in its execution, only to be resumed later. This is especially useful in async programming as it helps manage resources efficiently.

yield from is a construct that allows you to delegate part of a generator’s operations to another generator, ultimately simplifying the code. When using yield from, you include a subgenerator expression, which enables the parent generator to yield values from the subgenerator.
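A small sketch of yield from delegating to a subgenerator:

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    # Delegate to the subgenerator; its values are yielded
    # directly to the caller of outer()
    yield from inner()
    yield 3

print(list(outer()))  # [0, 1, 2, 3]
```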

Futures represent the result of a computation that may not have completed yet. In the context of async programming, a future object essentially acts as a placeholder for the eventual outcome of an asynchronous operation. Their main purpose is to enable the interoperation of low-level callback-based code with high-level async/await code. As a best practice, avoid exposing future objects in user-facing APIs.
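A minimal sketch of that placeholder role: a coroutine awaits a Future while callback-style code, scheduled on the event loop, fills in its result:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # A callback scheduled on the loop supplies the result later
    loop.call_later(0.1, fut.set_result, "done")
    # Awaiting the future suspends main() until set_result fires
    result = await fut
    print(result)  # done

asyncio.run(main())
```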

Transports are low-level constructs responsible for handling the actual I/O operations. They implement the communication protocol details, allowing you to focus on the high-level async/await code. Asyncio transports provide a streamlined way to manage sockets, buffers, and other low-level I/O related tasks.

Frequently Asked Questions

What are the main differences between ‘async for’ and regular ‘for’ loops?

The main difference between async for and regular for loops in Python is that async for allows you to work with asynchronous iterators. This means that you can perform non-blocking I/O operations while iterating, helping to improve your program’s performance and efficiency. Regular for loops are used with synchronous code, where each iteration must complete before the next one begins.

How can async for loop be implemented with list comprehensions?

Since Python 3.6 (PEP 530), async for can be used directly inside a list comprehension, as long as the comprehension appears inside a coroutine: results = [x async for x in some_async_iterator()]. If your goal is instead to run several coroutines concurrently and collect their results, combine asyncio.gather() with a generator expression.

For example:

import asyncio

async def square(x):
    await asyncio.sleep(1)
    return x * x

async def main():
    numbers = [1, 2, 3, 4, 5]
    results = await asyncio.gather(*(square(num) for num in numbers))
    print(results)

asyncio.run(main())

What are common patterns to efficiently use async in Python?

To efficiently use async in Python, you can employ the following patterns:

  1. Use asyncio library features, such as asyncio.gather(), asyncio.sleep(), and event loops.
  2. Write asynchronous functions with the async def syntax and use await to call other asynchronous functions.
  3. Use context managers, such as async with, to handle resources that support asynchronous operations.
  4. Use async for loops when working with asynchronous iterators to keep your code non-blocking.

How can you create an async range in Python?

To create an async range in Python, you can implement an asynchronous iterator with a custom class that adheres to the async iterator protocol. The custom class should define an __aiter__() method to return itself and implement an __anext__() method that raises StopAsyncIteration when the range is exhausted. Here is an example:

import asyncio

class AsyncRange:
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.start >= self.end:
            raise StopAsyncIteration
        current = self.start
        self.start += 1
        await asyncio.sleep(1)
        return current

Are there any examples of creating an async iterator?

Here’s an example of creating an async iterator using a custom class:

import asyncio

class AsyncCountdown:
    def __init__(self, count):
        self.count = count

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.count <= 0:
            raise StopAsyncIteration
        value = self.count
        self.count -= 1
        await asyncio.sleep(1)
        return value

async def main():
    async for value in AsyncCountdown(5):
        print(value)

asyncio.run(main())

What is the correct way to use ‘async while’ in Python?

Python has no dedicated async while statement; an ordinary while loop inside a coroutine becomes asynchronous simply by placing the await keyword before asynchronous functions or expressions within its body. Each await yields control so other tasks can run concurrently. Here’s an example:

import asyncio

async def async_while_example():
    count = 5
    while count > 0:
        await asyncio.sleep(1)
        print(count)
        count -= 1

asyncio.run(async_while_example())

💡 Recommended: Python Async Function

The post Python Async For: Mastering Asynchronous Iteration in Python appeared first on Be on the Right Side of Change.

Posted on Leave a comment

Python Enum Get Value – Five Best Methods

4/5 – (1 vote)

This article delves into the diverse methods of extracting values from Python’s Enum class. The Enum class in Python offers a platform to define named constants, known as enumerations.

These enumerations can be accessed through various techniques:

  1. By Name/Key: Access the enumeration directly through its designated name or key.
  2. By String/String Name: Utilize a string representation of the enumeration’s name, often in conjunction with the getattr function.
  3. By Index: Retrieve the enumeration based on its sequential order within the Enum class.
  4. By Variable: Leverage a variable containing the enumeration’s name, typically paired with the getattr function.
  5. By Default: Obtain the initial or default value of the enumeration, essentially the foremost member defined in the Enum class.

This article underscores the adaptability and multifaceted nature of the Enum class in Python, illustrating the myriad ways one can access the values of its constituents.

Method 1: Python Enum Get Value by Name

Problem Formulation: How can you retrieve the value of an Enum member in Python using its name or key?

In Python, the Enum class allows you to define named enumerations. To get the value of an Enum member using its name (=key), you can directly access it as an attribute of the Enum class.

from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

print(Color.RED.value)
# 1

Method 2: Python Enum Get Value by String/String Name

Problem Formulation: How can you retrieve the value of an Enum member in Python using a string representation of its name?

You can use the string representation of an Enum member’s name to access its value by employing the getattr() function, or equivalently with the Enum’s built-in subscription syntax, Color["RED"].value.

color_name = "RED"
print(getattr(Color, color_name).value)
# 1

Method 3: Python Enum Get Value by Index

Problem Formulation: How can you retrieve the value of an Enum member in Python using its index?

Enum members can be accessed by their order using the list() conversion. The index refers to the order in which members are defined.

print(list(Color)[0].value)
# 1

Method 4: Python Enum Get Value by Variable

Problem Formulation: How can you retrieve the value of an Enum member in Python using a variable that represents its name?

Similar to accessing by string, you can use the getattr() function with a variable holding the Enum member’s name.

var_name = "GREEN"
print(getattr(Color, var_name).value)
# 2

Method 5: Python Enum Get Value by Default

Problem Formulation: How can you retrieve the default value (or the first value) of an Enum in Python?

By converting the Enum to a list and accessing the first element, you can retrieve the default or first value of the Enum.

print(list(Color)[0].value)
# 1
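A related lookup goes in the opposite direction: calling the Enum with a value returns the member that holds it. This complements the five methods above, sketched here with the same Color enum:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

member = Color(1)    # look up the member by its value
print(member)        # Color.RED
print(member.name)   # RED
```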

💡 Recommended: Robotaxi Tycoon – Scale Your Fleet to $1M! A Python Mini Game Made By ChatGPT

The post Python Enum Get Value – Five Best Methods appeared first on Be on the Right Side of Change.

Posted on Leave a comment

Python Code for Getting Historical Weather Data

5/5 – (1 vote)

To get historical weather data in Python, install the Meteostat library using pip install meteostat or run !pip install meteostat with the exclamation mark prefix ! in a Jupyter Notebook.

If you haven’t already, also install the Matplotlib library using pip install matplotlib.

Then, copy the following code into your programming environment and change the highlighted lines to set your own timeframe (start, end) and GPS location:

# Import Meteostat library and dependencies
from datetime import datetime
import matplotlib.pyplot as plt
from meteostat import Point, Daily

# Set time period
start = datetime(2023, 1, 1)
end = datetime(2023, 12, 31)

# Create Point for Stuttgart, Germany
location = Point(48.787399767583295, 9.205803269767616)

# Get daily data for 2023
data = Daily(location, start, end)
data = data.fetch()

# Plot line chart including average, minimum and maximum temperature
data.plot(y=['tavg', 'tmin', 'tmax'])
plt.show()

This code fetches and visualizes the average, minimum, and maximum temperatures for Stuttgart, Germany, for the entire year of 2023 using the Meteostat library.

Here’s the output:

Here are the three highlighted lines:

  • start = datetime(2023, 1, 1): This sets the start date to January 1, 2023.
  • end = datetime(2023, 12, 31): This sets the end date to December 31, 2023. Together, these lines define the time period for which we want to fetch the weather data.
  • location = Point(48.787399767583295, 9.205803269767616): This creates a geographical point for Stuttgart, Germany using its latitude and longitude GPS coordinates. The Point class is used to represent a specific location on Earth.
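Since Daily(...).fetch() returns a pandas DataFrame, you can post-process the result with standard pandas calls. Here is a sketch using synthetic data in place of a live Meteostat query, assuming the column names match Meteostat’s tavg/tmin/tmax:

```python
import pandas as pd

# Synthetic stand-in for the DataFrame returned by data.fetch()
data = pd.DataFrame(
    {"tavg": [10.0, 12.0, 11.0],
     "tmin": [5.0, 6.0, 4.0],
     "tmax": [15.0, 18.0, 17.0]},
    index=pd.date_range("2023-01-01", periods=3),
)

print(data["tavg"].mean())  # average of the daily averages
print(data["tmax"].max())   # hottest day in the period
```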

You can use Google Maps to copy the GPS location of your desired location:

You can try it yourself in the interactive Jupyter notebook (Google Colab):

If you want to become a Python master, get free cheat sheets, and coding books, check out the free Finxter email academy with 150,000 coders like you:

The post Python Code for Getting Historical Weather Data appeared first on Be on the Right Side of Change.