I've been looking for a way to design a database for large numbers of leads (users) with custom fields.
I've checked out this question (How would you design a user database with custom fields?), but that solution would limit the number of custom fields.
So far I've designed it with three tables:

leads (ID, phone, email)
leads_fields (ID, name, type, required)
leads_fields_content (ID, fields_id, lead_id, content)

Users can create as many leads_fields as they need, e.g. 20 fields.
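This three-table layout is the classic entity–attribute–value (EAV) pattern. A minimal sketch of the schema, using SQLite for illustration; the column types and the index name are my assumptions, not part of the original design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leads (
    id    INTEGER PRIMARY KEY,
    phone TEXT,
    email TEXT
);

-- User-defined custom fields; each client adds as many as needed.
CREATE TABLE leads_fields (
    id       INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    type     TEXT NOT NULL,          -- assumed values: 'text', 'number', ...
    required INTEGER NOT NULL DEFAULT 0
);

-- One row per (lead, field) pair: the EAV value table.
CREATE TABLE leads_fields_content (
    id        INTEGER PRIMARY KEY,
    fields_id INTEGER NOT NULL REFERENCES leads_fields(id),
    lead_id   INTEGER NOT NULL REFERENCES leads(id),
    content   TEXT
);

-- Composite index so per-lead lookups are an index search, not a table scan.
CREATE INDEX idx_content_lead ON leads_fields_content (lead_id, fields_id);
""")
```

The composite index on (lead_id, fields_id) is the key design choice here: it lets the database jump straight to one lead's rows regardless of how large the value table grows.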
So when I have a lead, I would look up its lead_id in leads_fields_content, get that collection of rows, and fetch the corresponding leads_fields they reference.
I can see this working. However, one client wants to upload 300,000 leads from day one. That's 300,000 rows in leads. Then let's say there are 20 fields for each lead. That would create 300,000 × 20 = 6,000,000 rows in leads_fields_content, and that's for one client alone.
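Whether 6,000,000 rows is a problem depends less on the row count than on whether per-lead lookups can use an index. A quick sketch with SQLite's EXPLAIN QUERY PLAN (an illustration of the plan, not a benchmark; the index name is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leads_fields_content (
    id INTEGER PRIMARY KEY,
    fields_id INTEGER,
    lead_id INTEGER,
    content TEXT
);
CREATE INDEX idx_content_lead ON leads_fields_content (lead_id);
""")

# With the index in place, the planner reports a SEARCH using
# idx_content_lead rather than a full-table SCAN, so fetching one
# lead's 20 rows does not touch the other millions.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT content FROM leads_fields_content WHERE lead_id = 42
""").fetchall()
print(plan)
```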
My question: is this at all the correct way of designing this, taking into account the time it would take to go through 300,000 rows, and then 6,000,000 afterwards? And this will only keep growing with each new client.