How many records can FileMaker hold?

Divide your table into many tables and build relationships between them. You should also read the free FileMaker Training Series.


However, a stored calculation will only update when one of its input ("feeder") fields changes.

Otherwise, it imposes no penalty. Hence stored calculations are, in general, more efficient than unstored ones, because the conditions that cause them to update occur less frequently. That is especially true since many unstored calculations are unstored precisely because they reference related data: they have to be evaluated at runtime, they cannot be indexed, and people end up searching on them, which is a performance killer. Again, the reverse can be true, but it's the exception.
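To make the distinction concrete, here is a minimal sketch; the LineItems and Products names and fields are illustrative assumptions, not taken from this thread. An unstored calculation that reaches across a relationship is re-evaluated at display and search time, while the same expression kept as an auto-enter calculation on a regular field is written once and can be indexed.

    // Unstored calculation field LineItems::ExtendedPrice
    // (it references related data, so it cannot be stored or indexed,
    //  and is re-evaluated every time it is displayed or searched)
    Quantity * Products::UnitPrice

    // Stored alternative: define ExtendedPrice as a Number field with an
    // auto-enter calculation of the same expression. The result is written
    // when Quantity changes, and the field can be indexed and searched quickly.
    Quantity * Products::UnitPrice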

General design practice, reflected in virtually all of the performance sessions at the last several DevCons, is to stick with narrow tables, or even to split infrequently updated fields out into separate tables, to minimize cache refreshes on fields that don't need them.

I can remember this as a best practice a long way back.

The first one CAN be true in some cases. The second is conditionally true: if the query can be executed on the server, the records don't have to be fetched, because only the result is returned; if it can't, then all the records involved in the query have to be fetched so they can be compared with the local values. The results are cached, so subsequent queries will be faster.
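For concreteness, here is the kind of ExecuteSQL call being discussed. The Showlist table name comes up later in this thread; the ShowDate field and the date filter are illustrative assumptions only.

    // Count upcoming shows in a single query; the ? placeholder is bound
    // to the supplied argument, and the empty strings accept the default
    // field and row separators
    ExecuteSQL (
        "SELECT COUNT(*) FROM Showlist WHERE ShowDate >= ?" ;
        "" ; "" ;
        Get ( CurrentDate )
    )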

I never said ExecuteSQL was "always" less efficient. I said you shouldn't assume it is a performance booster, and that it is slower in many cases. You've paid that price previously. These are principles to keep in mind; they aren't guarantees. Yes, you have to make design decisions. Yes, there are tradeoffs. That doesn't mean there's no such thing as "best practices"; exceptions don't disprove rules.

Lots of good responses. It's rare, but when a server malfunctions and a file is damaged, it's usually the largest file that takes the hit.

It's worth spending some time on your structure to see about isolating your largest data tables into their own file. One client had their largest file, the General Ledger, go down on the server. They didn't even know it for two days, until they tried to run a GL report.

It took us another day of investigation and recovery to implement a clone. The rest of the system was up during all of that time, and no user had significant downtime.

As the record count goes up, the index of a single field can take hundreds of megabytes, even gigabytes. As developers we often go into the background and perform finds on fields that aren't relevant to users, forcing an index that isn't needed. We often create "audit" calcs to test the integrity of the system (finding duplicates, etc.).

These generally need to be stored and indexed while we work with them, but no user touches these fields, so when we're done, we turn them back to "unstored" and eventually recover the disk space. If we need them again, they are already in the system, but since they are unstored, they take virtually no resources…until needed the next time. While accessing large data sets with unstored calcs can bring your system to a screaming halt, there are often cases where the interface naturally interacts with only small record counts, or reports that are run very, very infrequently.

In these cases unstored calcs can appropriately reduce your file size. Regular business rules or server-side scripts can keep the summary tables in sync.

Rules made by others and thought to be the holy grail of database design can be ignored if they don't work for you. Case in point: on one list, a newbie asked for help in designing a report with photographs; this had been assigned to him by an employer who gave him a new box of FileMaker and told him to create a file and go photograph his property.

Sadly, he asked this question on a list with people who were eager to post but not really able to act as a consultant and provide a good answer. The replies ranged from SQL to web browsers to Java and beyond. Six weeks later the newbie was lost and had still not created the report. Eventually I responded that the report could have been created in five minutes using one table, and the employer would have been quite happy.

The report could be made hierarchical, responding to floor, apartment, room, fixture, etc. One table, if you understand FileMaker and leverage that knowledge. That table would have many, many fields, and each record would contain redundant data in order to make the report breaks, but it would work, it would answer the problem, and it would get the employee a raise for being so productive.

Instead, the list answers ended up making him look incompetent. There are different needs, and thus arbitrary rules don't always work. Maximizing the speed of a megalithic corporate database is one thing; maximizing the speed of getting the job done for a small firm is quite another.

Isn't it getting just a wee bit tiresome telling the same story time and again?

Your responses are appreciated. Of course, we're drifting slightly off the original topic (performance) and into areas of database size and stability. These are very important, maybe even more so than performance.

On the unstored calc issue: why is it that whenever you point out the performance weaknesses of unstored calcs, their defenders come out of the woodwork? Yes, of course, you can reduce file size by removing indexing, which includes using unstored calculations. The point is that within the context of performance, which means user interaction, they should be minimized or avoided.

I don't believe best practices are "arbitrary rules". They're the results of years of experimentation and real-world experience on the part of the community. A big part of my day job is inheriting databases built by "citizen developers" - people with little or no training in database theory, programming, etc. They do what you suggest: Slap something together that "just works".

If you're building something just for your own use, or for a short-term throwaway purpose, then fine, ignore the wisdom of hundreds or thousands of experienced people. But I'm not about to take that risk with something that's going into production for business use. I agree with Mike: if somebody's flexibility extends to ignoring basic principles, then it means he's got no principles at all, and that's not good.

In other words, even being capable of thinking about it is, imho, wrong and counterproductive. The windows you leave open will slam at the first wind, hitting you with more work to do and hitting your reputation too.

I don't know if those are causing some type of issue or not. There are currently 74 tables in this database; I don't know if this is a lot or not. The table in question, Showlist, is being used as the main source to collect data from the rest of the tables.

I've been looking it over again, and it seems there are a lot of fields in this table which duplicate information from other tables simply to have it in one place. As for the layout fields, the reference is the date when the show starts. This allows them to quickly get a summary of what's going on this week and in the weeks up to about a year into the future. This page is where the majority of the slowdown occurs, as it is based on the Showlist table.

It uses about 37 fields in the body and 12 in the sub-summary, along with 2 buttons. Again, I don't know if this is excessive or not, but it sure is difficult to read. I also don't know whether this contributes to the slowdown.

That's impossible to answer without knowing in detail what the solution needs to do and track. Absolute numbers without context don't say much.

Thanks for those links!

I'm looking forward to digging into those.

No, the Data Migration tool works with any of them. Thank you!

It sounds like you understand enough of it to trim the fat. But as you look at the layouts and how the data is used, you'll see what can then be scripted as part of that process and what needs to remain as calculations!

I just want to say thanks to everybody for being so helpful and supportive! This has been an excellent learning experience. I'm sure I will have many questions in the future, and I look forward to all the different viewpoints coming together to give me several options for solutions.

In addition to changing calculation fields to stored auto-calc fields, there are other things you should consider. Here are some oft-referred-to sources of great information:

Ryan, regarding your question about "what if someone changes the base information?": auto-calc fields are "triggered" either by record creation or by a change in any of their input fields. When defining an auto-calc field, there is a checkbox option, "Do not replace existing value of field (if any)".

When checked, this causes the auto-calc to retain its initial value once it has triggered and not change again ("set and forget"). If the checkbox is not checked, the field will recalculate each time one of its inputs changes. For instance, say a number field in a line item named "Amount" is defined as an auto-calc of, for example, Price * Quantity. Each time you put or change a value in either Price or Quantity, Amount will auto-calc again. However, if an auto-calc is based on fields in a related table, it will not automatically recalculate when the related field's value changes.

To build on the former example, say Amount also applies a discount stored on the related Client record, for example Price * Quantity * ( 1 - Client::Discount ). After entering a line item, the values for Price, Quantity and Amount are now set as stored values.

The salesperson goes to the Client record and updates the Discount rate. Any line items that already exist will not change their value for Amount as a result of the change in the Client record's Discount rate.

This is a good thing, because you don't want all existing line items for this customer to change retroactively when the Client's discount rate is modified; you want it to apply only to new, future line items for that Client. In data situations where you do want a changed value in a related table to apply, you have to use some other update mechanism to trigger the related auto-calcs to update: perhaps with a script, or by going to a list layout for the line items table, finding all line items for that client, and then using "Replace Field Contents" to replace Quantity with itself (a scripted version of this is sketched below). This will trigger the update of Amount, since the triggering event is the change in the value of Quantity, even though it was replaced with its own value rather than a new one; and because the auto-calc re-fetches the current Discount value from the Client table, your line items will now reflect the new Discount rate.
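A minimal scripted sketch of that approach, using the same Amount/Quantity example; the layout name, the ClientID fields, and the assumption that the script starts from a Client record are all illustrative, not from the thread.

    # Re-trigger the Amount auto-calc for one client's existing line items
    Set Variable [ $clientID ; Value: Clients::ClientID ]
    Go to Layout [ "LineItems_List" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ LineItems::ClientID ; $clientID ]
    Perform Find []
    # Replacing Quantity with its own value counts as a change, so the
    # Amount auto-enter calculation re-evaluates and picks up the current
    # Client::Discount for every record in the found set
    Replace Field Contents [ LineItems::Quantity ;
        Replace with calculation: LineItems::Quantity ; With dialog: Off ]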

(Correction removed per Wim's comment below.) To take advantage of the Data Migration tool, you would need to convince your client to move the database to an FMS 17 installation. This feature enables you to work on an offline copy of the database, or even on a hosted copy of the database under a different name.

You can then make changes that would be very risky in the production database and see if they work; for something really risky, drop a backup before you try it, and if things break badly, go back one version. And since Data Migration uses the internal IDs of FileMaker objects like fields and tables rather than their human-visible names, you can change those names in your dev copy and still restore the data from the production file automatically, with no import mapping to get right, once you have thoroughly tested the new version's added functionality and restructuring.
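For reference, a typical run of the command-line Data Migration tool looks roughly like this; the file names and account are placeholders, and the exact option names should be confirmed against the help output of the tool version you have.

    # Migrate data from the production file into a clone of the
    # re-structured development file, producing a new migrated copy
    FMDataMigration \
        -src_path Production.fmp12 -src_account admin -src_pwd <password> \
        -clone_path DevClone.fmp12 -clone_account admin -clone_pwd <password> \
        -target_path Production_Migrated.fmp12 -v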

After working in live databases for two decades, I find this new feature to be little short of a minor miracle.

Yes, a script trigger on the fields in the other tables could be used to run a script that does the related updates. Another method I've used is to reference an "On Modification Timestamp" field inside the auto-calc, along the lines sketched below. This way, the auto-calc triggers every time the record is changed in any way: the On Modification Timestamp field updates whenever anything in the record changes, and the change to the timestamp field in turn triggers the auto-calc.
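Here is a minimal sketch of that pattern, assuming a ModificationTS field defined as an auto-enter modification timestamp and the Amount example from earlier; all field names are illustrative.

    // Auto-enter calculation for Amount, with "Do not replace existing value
    // of field (if any)" unchecked. ModificationTS changes whenever anything
    // in the record is edited, which re-triggers this calculation; Let simply
    // names the trigger value and then ignores it.
    Let ( trigger = ModificationTS ;
        Price * Quantity * ( 1 - Client::Discount )
    )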

It requires some thinking to make sure this is adequate for your purposes.

How many fields in a table is too many?

Maybe this one from a few days ago? A good clue to too many fields is Phone1, Phone2, Phone3, Phone4 …

Stephen ("People who enjoy meetings should not be in charge of anything.")

HTH, Beverly (sent from miPhone).

Ryan, you have a huge task ahead of you.

You need to open the thread page; replying from your inbox doesn't include the link.



