Town Hall Questions Q1 2020

May 7, 2020

Because of the COVID-19 crisis, we've taken a different approach for our Town Hall and Q&A. If you missed our Town Hall, you can watch it on our YouTube channel. Below are questions from our community members, and answers from the Storj Labs leadership team. Thanks to all those who submitted questions.

1. Is it possible to have more Nodes (different storage, different IP address) registered under one email address?

Yes. You may operate multiple Nodes under the same or different IP addresses. Each Node must meet the minimum requirements, of course, and each Node must have its own physical storage connected. You need to use the same wallet address for all of the Nodes operated by one person or legal entity. But other than that, nothing precludes operating multiple Storage Nodes.

We'll be simplifying the Node registration steps soon to eliminate email-based registration. We'll still want an email address from you for notices, but you won't need one to create a Storage Node identity.

2. If we want to create a web app, what will the workflow be? The normal workflow is:
1. The user selects a file in the browser; it gets uploaded to the web server first and then to the Tardigrade server.
2. In this case, uploading the file from the browser to the server will depend on the user's ISP, is that right?

Can we use a file object for uploading files to and downloading files from Tardigrade?

There is a ton of information related to solution architecture for different use cases. Most of the early use cases we're seeing for static object storage are related to backing up on-premises data to the cloud and hybrid cloud storage for on-premises or mobile applications.

We have a number of early web application use cases using an internet-accessible S3 gateway. There are important considerations for how this architecture is implemented to make the most efficient and economical use of bandwidth.
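As a rough illustration of that architecture, here is a minimal sketch of a web server forwarding an uploaded file to an S3 gateway using the AWS SDK for Go. The endpoint, credentials, bucket, and key below are placeholders, not values from any real deployment; substitute your own gateway configuration.

```go
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Placeholder endpoint and credentials for a gateway reachable by the web
	// server; replace with the values from your own gateway configuration.
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String("http://localhost:7777"),
		Region:           aws.String("us-east-1"), // placeholder; the gateway doesn't use AWS regions
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
		S3ForcePathStyle: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}

	// In a real web app this body would be the file received from the browser.
	body := bytes.NewReader([]byte("file contents received by the web server"))

	svc := s3.New(sess)
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("uploads/report.pdf"),
		Body:   body,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded via the S3 gateway")
}
```

Because the web server sits between the browser and the gateway in this pattern, the file transits the server's bandwidth twice (in from the browser, out to the gateway), which is one of the efficiency considerations mentioned above.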

At the present time, we don't have a native browser-based implementation. One of the things we haven't done yet is to implement a way for a browser to trust the Storage Nodes from which pieces of files are directly downloaded peer-to-peer.

In the future, we absolutely intend to solve that problem, but we're not there yet.

To reiterate: we don't currently recommend Storj for direct-to-browser web hosting, though we plan to tackle that when time allows.

3. Why is it so complicated to just get a "Hello Storj" example running using any language library?

With the production release of the Tardigrade Platform, we released version 1.0 of our developer library, libuplink. We made some additional changes to the C binding and then released updates to the other language bindings. We've made significant improvements to the tooling over the last few months, and even over the last few weeks.

Over the next few weeks we're also making improvements to the documentation to make the bindings easier to use.

We're also making the documentation clearer about the distinction between Storj-built tools and community-sourced tools.
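In the meantime, here is a minimal "Hello Storj" sketch, assuming the v1 Go libuplink API (storj.io/uplink). The Satellite address, API key, passphrase, bucket, and key are placeholders you'd replace with your own project's values.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()

	// Placeholder values; use your own Satellite address, API key, and
	// encryption passphrase from the Satellite web interface.
	access, err := uplink.RequestAccessWithPassphrase(ctx,
		"satellite-id@satellite-address:7777", "my-api-key", "my-passphrase")
	if err != nil {
		log.Fatal(err)
	}

	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()

	// Create the bucket if it doesn't exist yet.
	if _, err := project.EnsureBucket(ctx, "hello"); err != nil {
		log.Fatal(err)
	}

	// Upload a single small object.
	upload, err := project.UploadObject(ctx, "hello", "hello.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := upload.Write([]byte("Hello Storj")); err != nil {
		log.Fatal(err)
	}
	if err := upload.Commit(); err != nil {
		log.Fatal(err)
	}

	fmt.Println("wrote hello.txt to the hello bucket")
}
```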

4. How will the supply play a role if there is a sudden spike in consumption?

One of the lessons we learned operating the second generation of the network was that onboarding a large amount of capacity has historically not been an issue. That's the primary reason we're much more focused on generating demand.

We have a number of dials we can turn to rapidly activate supply and aim to always have enough supply for the next 3-6 months or more. We've had a lot of interest from commercial Storage Node Operators (businesses and regional data centers, for example). Many of these partners are able to activate large amounts of storage if the demand is there.

In addition, we can use techniques like surge payouts to incentivize new Storage Nodes to add additional supply.

That said, fundamentally this is the reason per-project usage limits exist, just as they do at AWS. If someone suddenly needed to store 1 EB of data, project limits ensure we'd have a heads-up to ramp up our storage capacity using the dials mentioned above.

5. Why are the old Nodes that have lived on the network for a year considered new on the new Satellites?

Each Satellite individually vets all of the Storage Nodes with which it works; this is an artifact of the decentralized nature of the network. Because vetting happens per Satellite, a Node that is fully vetted on one Satellite still starts the vetting process from the beginning on any Satellite it hasn't worked with before. Tardigrade Satellites operated by Storj Labs behave the same as community-run Satellites in this respect.

6. How will the SLA be put across for large customers or enterprises?

Our SLA is published in our Tardigrade Terms of Service. We're able to maintain our SLA for availability based on the scalability of the components operated by Storj Labs as well as our expertise in operating our software.

Automated functions such as Storage Node audits, uptime checks, and file repair allow us to ensure the durability of data stored on the Tardigrade Platform.

7. If there is an ERC20 compliance issue in any region, how do you deal with it? For suppliers & consumers?

I believe the intent of this question is related to how Storage Node Operators and customers can use the STORJ token on the network when they are located in jurisdictions with different regulatory frameworks and perspectives on the use of tokens.

First, customers are able to purchase cloud storage and bandwidth using either a credit card or the STORJ utility token. If a customer finds the use of STORJ tokens to be an issue, credit cards are an easy alternative. However, paying in STORJ tokens rewards you with a bonus on every deposit, incentivizing use of the token.

With regard to Storage Node Operators, under the Storage Node Operator Terms and Conditions, Storage Node Operators are responsible for ensuring that the operation of their Nodes, and by extension the receipt of STORJ utility tokens, complies with local regulations.

8. If Storj keeps 3 redundant copies of a file/shard, then how does the supplier get paid or affected, and how does it affect the consumers (in simple words, who bears the cost)?

First, it's important to clarify that the network doesn't store three redundant copies of a file; we've written extensively on the difference between replication and erasure coding. We use only erasure-coded data for redundancy and don't use replication.

With erasure codes, each file is broken into a minimum of 80 pieces, of which only 29 (any 29) pieces are required to recover the file. The erasure coding redundancy results in an expansion factor of about 276%, which means a 1 GB file uploaded by a customer is actually stored as 2.76 GB.
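To make that arithmetic concrete, here is a tiny Go sketch using the 29-of-80 numbers above to compute the expansion factor and the resulting stored size for a 1 GB upload.

```go
package main

import "fmt"

func main() {
	const (
		requiredPieces = 29.0 // any 29 pieces are enough to reconstruct the data
		uploadedPieces = 80.0 // pieces created per segment at upload time
		fileSizeGB     = 1.0  // size of the customer's upload
	)

	// Expansion factor is simply total pieces divided by required pieces.
	expansion := uploadedPieces / requiredPieces

	fmt.Printf("expansion factor: %.0f%%\n", expansion*100)                        // ~276%
	fmt.Printf("1 GB of customer data stored as: %.2f GB\n", fileSizeGB*expansion) // 2.76 GB
}
```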

The cost structure for this redundancy is built into the pricing model. Satellite operators can structure their price/cost model however they want, but Tardigrade Satellites pay Storage Node Operators at a different rate than customers are charged to store data on the network.

Overall, our goal is to provide an excellent service to our customers that offers comparable availability and durability to other cloud services with superior security, privacy, and economics. We can do that because we don't have to build and operate expensive data centers. Instead, we fairly compensate Storage Node Operators for the storage and bandwidth used by the network. Between the different fees we pay to Storage Nodes for storage, bandwidth, data repair and audit bandwidth, our goal is to pay out about $0.60 of every dollar to the network.

9. As of now it's only possible to upload files one by one. Will there be an option to upload files in bulk or upload directories with a lot of files?

It's true that our current Uplink CLI is missing a recursive copy feature; it's on our roadmap and we plan to add it soon. If you'd like to beat us to it, we'd happily accept a source contribution to add recursive copy! It's a small task but would be a great OSS contribution (there's a rough sketch of what it could look like at the end of this answer).

That said, we have a number of other options available for storing bulk data. Using the S3 Gateway, any AWS S3 tool can do the trick (including the AWS S3 CLI). We also have an Rclone integration, an upcoming FileZilla integration, and a number of other integrations. While the Uplink client uploads one file at a time in serial order, these tools can be used to upload or download entire directories to or from the Tardigrade Platform.

In addition to our CLI, S3 Compatible Gateway, and libuplink developer library, we've also got other tools to make moving data easy.

Whether you want to use a GUI like FileZilla, use Rclone and its ecosystem of tools to sync directories and make directory listings more efficient, or just make managing cloud backups easier with Restic, we've got options to make bulk operations easier to manage.
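For anyone interested in contributing that recursive copy feature, here is a minimal sketch, assuming the v1 Go libuplink API, of how a directory upload could work. The object keys are the file paths relative to the root directory, and the *uplink.Project is assumed to come from an already opened project, as in the "Hello Storj" sketch above.

```go
package bulkupload

import (
	"context"
	"io"
	"os"
	"path/filepath"

	"storj.io/uplink"
)

// recursiveUpload walks a local directory tree and uploads every regular file
// to the given bucket, using each file's path relative to root as its object key.
func recursiveUpload(ctx context.Context, project *uplink.Project, bucket, root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil || info.IsDir() {
			return walkErr
		}

		key, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}

		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		upload, err := project.UploadObject(ctx, bucket, filepath.ToSlash(key), nil)
		if err != nil {
			return err
		}
		if _, err := io.Copy(upload, f); err != nil {
			_ = upload.Abort()
			return err
		}
		return upload.Commit()
	})
}
```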

10. "Files will be stored through 2019". How do I store a file through 2030?

The Tardigrade Platform is an enterprise-grade cloud storage service. Files stored on the platform persist until deleted. If you want your files to persist until 2030, simply don't delete them. Alternatively, you can set a TTL (Time To Live) on your files for some time in 2030, and they'll be deleted automatically when the TTL expires.
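As a rough sketch of the second option, assuming the v1 Go libuplink API's UploadOptions with an Expires field, an upload with a 2030 TTL might look like this. The bucket and key are placeholders, and an already opened *uplink.Project is assumed.

```go
package ttlupload

import (
	"context"
	"time"

	"storj.io/uplink"
)

// storeUntil2030 uploads data with an expiration time so the network deletes
// the object automatically when the TTL expires. Pass nil options instead to
// keep the object until you explicitly delete it.
func storeUntil2030(ctx context.Context, project *uplink.Project, data []byte) error {
	expires := time.Date(2030, time.January, 1, 0, 0, 0, 0, time.UTC)

	upload, err := project.UploadObject(ctx, "archive", "keep-until-2030.dat",
		&uplink.UploadOptions{Expires: expires})
	if err != nil {
		return err
	}
	if _, err := upload.Write(data); err != nil {
		_ = upload.Abort()
		return err
	}
	return upload.Commit()
}
```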

11. We would like to know when it will be possible to operate our own Satellites. We would like to build up our own network, with our own Storage Nodes, to meet our corresponding storage requirements. I would be very happy about feedback.

It is possible to run your own Satellite with your own Storage Nodes today! In fact, many of our developers do exactly this during development, debugging, and testing.

If you want your Satellite to have the ability to use our wider network of Storage Nodes, that is also possible today, but the user experience for Storage Node Operators to join your Satellite doesn't yet meet our quality standards, so we haven't yet documented or made the experience particularly robust.

Suffice it to say, we're planning to release an improved user interface for community Satellites with documentation as soon as possible, but we don't want to rush it and cause a poor Storage Node Operator experience with community Satellites.

12. I have been waiting for years to use Storj; I'll upload terabytes of stuff as soon as it's possible to have a shared drive on Windows that is my Storj drive. I got stuck on the FileZilla setup, and even if I could get it working, it is not a user-friendly solution.

What are your plans for expanding accessibility to regular home users, both on your website and with client-side software that emulates a shared drive?

Traditional, centralized cloud storage went through a number of iterations before fully accessible cloud storage was in the hands of regular home users. The first problem cloud storage had to solve was the general problem of storage, solved wonderfully well by Amazon's first AWS offering, S3. S3 is a cloud object storage tool for developers, not end users, and it was due to this, among other reasons, that the Dropbox team initially built Dropbox on top of S3.

At Storj, our top priority is to bring decentralized storage to the world, and we believe the most effective and sustainable path, both technologically and financially, is the same route: build a solid developer-focused storage layer first, then enable user-friendly experiences like a shared network drive on top of it.
