🐶 OhMyScrapper - v0.9.5

OhMyScrapper scrapes texts and URLs, looking for links and job data, to build a final report with general information about job positions.

Scope

  • Read texts;
  • Extract and load URLs;
  • Scrape the URLs looking for og: tags and titles;
  • Export a list of links with relevant information.
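The og: tag scraping in the scope above can be illustrated with a small sketch using Python's standard-library HTML parser. This is not the package's actual implementation, just the general idea of pulling og:* meta properties and the page title out of fetched HTML:

```python
from html.parser import HTMLParser

class OgTagParser(HTMLParser):
    """Collects og:* meta properties and the page <title> text."""
    def __init__(self):
        super().__init__()
        self.og = {}
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property", "").startswith("og:"):
            self.og[attrs["property"]] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Example HTML; a real run would feed the body of an HTTP response.
html = ('<html><head><title>Example</title>'
        '<meta property="og:title" content="Senior Dev"></head></html>')
parser = OgTagParser()
parser.feed(html)
print(parser.og["og:title"])  # Senior Dev
print(parser.title)           # Example
```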

Installation

You can install it directly with pip:

pip install ohmyscrapper

I recommend using uv, so you can just run the commands below and everything is installed:

uv add ohmyscrapper
uv run ohmyscrapper --version

But you can also run everything as a tool, for example:

uvx ohmyscrapper --version

How to use and test (development only)

OhMyScrapper works in 3 stages:

  1. It collects URLs from a text and loads them into a database;
  2. It scrapes the collected URLs and reads what is relevant. If it finds new URLs, they are collected as well;
  3. It exports the list of URLs to CSV files.

You can run all 3 stages with the command:

ohmyscrapper start

Remember to put your text file in the /input folder with a name ending in .txt!

You will find the exported files in the /output folder, like this:

  • /output/report.csv
  • /output/report.csv-preview.html
  • /output/urls-simplified.csv
  • /output/urls-simplified.csv-preview.html
  • /output/urls.csv
  • /output/urls.csv-preview.html
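The CSV-plus-HTML-preview pairing above can be sketched in a few lines; the column names and rows here are made up for illustration and may not match the package's real report layout:

```python
import csv
import io

# Hypothetical rows; the real report columns may differ.
rows = [
    {"url": "https://example.com/jobs/1", "title": "Data Engineer"},
    {"url": "https://example.com/jobs/2", "title": "Backend Dev"},
]

# Write the CSV report into a string buffer (a real run writes to /output).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["url", "title"])
writer.writeheader()
writer.writerows(rows)
report_csv = buf.getvalue()

# Build a crude HTML preview of the same data.
cells = "".join(
    f"<tr><td>{r['url']}</td><td>{r['title']}</td></tr>" for r in rows
)
preview_html = f"<table><tr><th>url</th><th>title</th></tr>{cells}</table>"
print(report_csv)
```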

BUT: if you want to go step by step, here it is:

First, we load a text file you would like to search for URLs. It works with any .txt file.

The default folder is /input. Put one or more text files (ending in .txt) in this folder and use the load command:

ohmyscrapper load

Or, if you have a file in a different folder, just use the -input argument like this:

ohmyscrapper load -input=my-text-file.txt

You can also add a URL directly to the database, like this:

ohmyscrapper load -input=https://cesarcardoso.cc/

That will append the URL to the database to be scraped.
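At its core, the load stage pulls URLs out of free text. A minimal sketch of that idea (a rough regex, not the package's actual parser) could be:

```python
import re

# Rough pattern: a scheme followed by non-whitespace; trailing punctuation
# is stripped afterwards. This is a sketch, not the real implementation.
URL_RE = re.compile(r"https?://\S+")

def extract_urls(text):
    """Return URLs found in a blob of text, in order, de-duplicated."""
    urls = []
    for raw in URL_RE.findall(text):
        url = raw.rstrip(".,;:!?)\"'")
        if url not in urls:
            urls.append(url)
    return urls

text = "Job here: https://lnkd.in/abc123 and https://example.com/jobs/view/42."
print(extract_urls(text))
# ['https://lnkd.in/abc123', 'https://example.com/jobs/view/42']
```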

The load command creates the database if it doesn't exist and stores every URL OhMyScrapper finds. After that, let's scrape the URLs with the scrap-urls command:

ohmyscrapper scrap-urls

By default, that will scrape only the LinkedIn URLs we are interested in. For now they are:

  • linkedin_post: https://%.linkedin.com/posts/%
  • linkedin_redirect: https://lnkd.in/%
  • linkedin_job: https://%.linkedin.com/jobs/view/%
  • linkedin_feed: https://%.linkedin.com/feed/%
  • linkedin_company: https://%.linkedin.com/company/%
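The % wildcards above read like SQL LIKE patterns. A sketch of classifying a URL against that table (patterns copied from the list above; the matching logic via fnmatch is an assumption, not the package's real code) could be:

```python
from fnmatch import fnmatch

# Patterns as listed above; '%' is treated as a LIKE-style wildcard.
URL_TYPES = {
    "linkedin_post": "https://%.linkedin.com/posts/%",
    "linkedin_redirect": "https://lnkd.in/%",
    "linkedin_job": "https://%.linkedin.com/jobs/view/%",
    "linkedin_feed": "https://%.linkedin.com/feed/%",
    "linkedin_company": "https://%.linkedin.com/company/%",
}

def classify(url):
    """Return the first matching type name, or None for unknown URLs."""
    for name, pattern in URL_TYPES.items():
        # Translate LIKE-style '%' into the shell-style '*' fnmatch expects.
        if fnmatch(url, pattern.replace("%", "*")):
            return name
    return None

print(classify("https://www.linkedin.com/jobs/view/12345"))  # linkedin_job
print(classify("https://lnkd.in/abc"))                       # linkedin_redirect
```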

But we can scrape every other URL generically using the --ignore-type argument:

ohmyscrapper scrap-urls --ignore-type

And we can make it recursive by adding the --recursive argument:

ohmyscrapper scrap-urls --recursive

!!! Important: we are not sure about blocking we may run into for making too many requests.
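One common way to lower the chance of being blocked is to space requests out. A minimal throttling sketch (the delay value and the `fetch` callback are arbitrary assumptions, not part of OhMyScrapper's interface) could be:

```python
import time

def fetch_all(urls, fetch, delay_seconds=2.0):
    """Call fetch(url) for each URL, pausing between requests."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay_seconds)  # be polite between requests
        results.append(fetch(url))
    return results

# Example with a stand-in fetch function and no real network traffic.
fetched = fetch_all(["https://a.example", "https://b.example"],
                    fetch=lambda u: f"fetched {u}", delay_seconds=0.0)
print(fetched)
```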

And we can finally export with the commands:

ohmyscrapper export
ohmyscrapper export --file=output/urls-simplified.csv --simplify
ohmyscrapper report

That's the basic usage! But you can learn more using the help:

ohmyscrapper --help

See Also

License

This package is distributed under the MIT license.
