How do I get access to the data?
First, register on the Nightingale OS Platform. When you register, you enter a queue for access to the data. We will grant access to researchers incrementally, starting in late December and picking up speed in January.
Before granting access, we’ll ask you to provide the following:
Contact information for a person at an academic institution who can verify your non-commercial use case for Nightingale OS
Certificate of completion for a suitable training program in human research subject protections, such as the Collaborative Institutional Training Initiative (CITI) Program’s Data or Specimens Only Research course
After I get admitted to the platform, what then?
You can use free Nightingale cpu.xsmall instances to explore data in a Python JupyterLab environment. Larger instances and GPUs are coming soon.
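Once you’re admitted, a first notebook cell can simply look around your workspace. A minimal sketch using only the standard library (no Nightingale-specific dataset paths are assumed here, since none are documented above):

```python
# List what's available in your home directory from a JupyterLab cell.
# This is a generic first-session check, not a Nightingale-specific API.
import os

home = os.path.expanduser("~")
for entry in sorted(os.listdir(home)):
    print(entry)
```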
Do I have to pay to use Nightingale OS?
Nightingale cpu.xsmall instances are free. Larger instances will require you to deposit funds. If someone else needs to fund your account, that’s fine: you can generate an invoice and email it to them.

How much money is Nightingale making from computing?
As a nonprofit, we provide computing resources as close to cost as we can. For researchers who need additional help, we are working with our funders to provide scholarships that ensure open, equitable access to Nightingale data. Please tell us if you need assistance.
Can I install Python packages?
Yes. You won’t have access to the internet, but you do have access to our PyPI mirror for pip installs. You won’t have access to conda at this time.
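On an instance, pip is already pointed at the mirror, so ordinary `pip install` commands work without internet access. For reference, a mirror like this is typically wired up through pip configuration along the lines below; the mirror hostname shown is purely hypothetical, since the FAQ doesn’t publish the real one:

```ini
; Hypothetical pip.conf — the actual mirror URL is preconfigured for you
[global]
index-url = https://pypi.mirror.example.internal/simple
```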
Are TensorFlow and PyTorch set up for me?
Yes, of course. 😉
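If you want to confirm this from a notebook cell, a small stdlib-only check reports which frameworks are importable without crashing if one happens to be absent:

```python
# Check whether the preinstalled deep-learning frameworks can be imported.
import importlib.util

for name in ("torch", "tensorflow"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'available' if found else 'missing'}")
```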
Can I use a different programming language?
Not yet. We support Python because it serves the largest number of users. We may support additional languages and computing environments in the future.
Can I use my local editor?
No. Nightingale OS currently offers only a browser-based IDE, though we plan to support additional browser-based editors in the future.
What instance sizes are available?
For our launch we are offering free cpu.xsmall instances.
Beginning in January 2022, we plan to offer larger CPU-only instances with up to 30 CPUs and 250 GB of memory. Once you’ve funded a project, you will be able to take advantage of cpu.small, cpu.medium, cpu.large, and cpu.xlarge instances.
We plan to introduce gpu.small instances (1 GPU with 32 GB of memory, 12 CPUs, and 96 GB of system memory) in roughly mid- to late January, although the timeline will depend on our user metrics and GPU sourcing. GPU instances will be first-come, first-served, and we expect them to be scarce in the first week. We’ll increase capacity as we measure demand across the platform and work with our providers to secure resources; our goal is near-100% availability of gpu.small instances within a few weeks of their introduction.
Can I download Nightingale data and work with it in my own environment?
No. Our Terms of Service specify that Nightingale data must remain inside the Nightingale environment; you may not download or store it anywhere else.
Can I download my own results derived from the data?
Yes. It is your responsibility to ensure whatever you remove from the Nightingale environment is consistent with the Terms of Service. We monitor all network traffic. We will investigate all suspicious activity and aggressively pursue violations to the full extent of the law.
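In practice, this usually means exporting derived, aggregate results rather than raw records. A minimal sketch of that pattern, using synthetic stand-in values (not Nightingale data):

```python
# Export only derived, aggregate results -- never raw records.
import csv
import statistics

ages = [34, 51, 29, 62, 47]  # synthetic stand-in values
summary = {"n": len(ages), "mean_age": statistics.mean(ages)}

with open("summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(summary))
    writer.writeheader()
    writer.writerow(summary)
```

Whether any given export is permissible is defined by the Terms of Service, not by this sketch.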
Can I upload my own data or pre-trained models?
Can I access resources on the internet?
No. If you need something, please contact us.
How do I collaborate with other people?
Everything in Nightingale OS sits within a project. You can invite other registered users to join your projects.
Your instances are private, and your $HOME directory is too. But projects are made for collaboration. Each project gets a unique storage volume that you can access at $HOME/project. Every instance in a project gets the same project directory. You decide what to share with other project members (via the project directory) and what to keep private.
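The private-vs-shared split above comes down to paths. A small sketch (the file names are illustrative, not part of the platform):

```python
# Everything under $HOME is private to you, except the $HOME/project
# volume, which every member of the project sees.
import os

home = os.path.expanduser("~")
private_scratch = os.path.join(home, "scratch.txt")         # only you see this
shared_notes = os.path.join(home, "project", "notes.md")    # project members see this
print(private_scratch)
print(shared_notes)
```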
Can we use GitHub, GitLab, etc.?
No. You can use the shared storage in your projects to create shared Git repositories, but we do not offer access to public repos at this time.
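One way to set that up, sketched below: host a bare repository on the shared project volume, so collaborators can clone, push, and pull over the filesystem with no network access. The repository name is illustrative.

```shell
# A bare repo on the shared project volume acts as the team's "origin".
git init --bare "$HOME/project/shared-repo.git"

# Each collaborator clones it into their private home directory.
git clone "$HOME/project/shared-repo.git" "$HOME/shared-repo"
```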