Search Results

Search found 6017 results on 241 pages for 'universal records managem'.


  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows:

        DAL
        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post
                        select p;
            return posts;
        }

        Service
        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        // Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> posts = GetPosts();
            // CODE TO CREATE FEEDS HERE
            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database via GetPosts(), just to extract 5 from it? If each post is, say, 5 KB in the database and there are 1 million records, won't that be 5 GB of RAM used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
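
    A sketch of what the restricted query could look like if the service keeps the IQueryable<Post> composable instead of calling ToList() first; the DatePosted ordering column is an assumption, since the Post class isn't shown:

        // Sketch: let the database do the limiting. No rows are fetched until
        // ToList() runs, and by then the query carries the TOP/LIMIT clause.
        public IList<Post> GetLatestPosts(int latestCount)
        {
            return repository.GetPosts()                 // still IQueryable<Post>, nothing fetched yet
                .OrderByDescending(p => p.DatePosted)    // hypothetical ordering column
                .Take(latestCount)                       // only the newest N rows
                .ToList();                               // materializes just those rows
        }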

    Read the article

  • Multiple Out-of-Browser Applications in One Application

    - by Otaku
    I'm looking at a scenario where I need to create a single "master" Silverlight application and then add "child" applications for an out-of-browser Silverlight application. The scenario is something like this: a user will visit a gameboard web site and choose a game to play. Let's call it Checkers. He likes it, so he installs the out-of-browser app to his desktop. He then finds Chess, and installs that too. For both games, while played on the site, he has stats (games played, win/loss records, etc.). For each game on the site, he navigates to a different page. But now he wants to play offline and view his stats and other cross-game information. He wants a single app to launch to play either game. From his single out-of-browser app, he sees that Go is also available, and he places a checkmark against it to download on his next connection. Does anyone have experience developing multiple out-of-browser Silverlight apps that reside within a single master app? What considerations apply to this type of design? How would this work in terms of the install experience from different web pages?

    Read the article

  • WPF DataGrid inside Accordion height issue

    - by LucasS
    I am using the latest WPF Toolkit but am running into a height issue when I have a large record set bound to a DataGrid inside an AccordionItem. The height of the Accordion itself scales nicely, but the DataGrid inside the accordion control doesn't get a scrollbar or get constrained in any way, so the records are hidden. I know that I am most probably missing something very simple (like binding the DataGrid's height property to the Accordion's, but that seems messy). Here is a cut-down version of the code (and yes, it has the same problem if you bind in a big recordset):

        ...
            </layouttoolkit:AccordionItem>
            <layouttoolkit:AccordionItem Header="grid 2">
                <dg:DataGrid AutoGenerateColumns="False" CanUserAddRows="False"
                             CanUserDeleteRows="False" SelectionMode="Single">
                    ...
                    </dg:DataGrid.Columns>
                </dg:DataGrid>
            </layouttoolkit:AccordionItem>
            <layouttoolkit:AccordionItem Header="grid 3">
                <dg:DataGrid AutoGenerateColumns="False" CanUserAddRows="False"
                             CanUserDeleteRows="False" SelectionMode="Single">
                    ...
                    </dg:DataGrid.Columns>
                </dg:DataGrid>
            </layouttoolkit:AccordionItem>
        </layouttoolkit:Accordion>
    </UserControl>

    Read the article

  • Lost Update Anomaly in Sql Server Update Command

    - by Javed
    Hi, I am very much confused. I have a transaction at the ReadCommitted isolation level. Among other things, I am also updating a counter value in it, something similar to this:

        UPDATE tblCount SET counter = counter + 1

    My application is a desktop application, and this transaction happens to occur quite frequently and concurrently. We recently noticed that sometimes the counter value doesn't get updated, i.e. an increment is missed. We also insert one record on each counter update, so we are sure the records have been inserted but the counter somehow fails to update. This happens about once in 2000 simultaneous transactions. I seriously doubt it is a lost-update anomaly I am facing, because if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow. Could it be that this update command works in two parts? As in, first it reads the counter value (during which it doesn't get an exclusive lock) and then writes the new calculated value (when it does get an exclusive lock)? Please help, I have got really confused.
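
    For contrast, a sketch in C# of the pattern that does lose updates versus a single-statement increment that reads the new value atomically; table and column names follow the question, and the int column type is an assumption:

        using System.Data.SqlClient; // classic SQL Server provider assumed

        // Sketch: a separate SELECT followed by an UPDATE can lose increments
        // under READ COMMITTED, because the shared lock from the SELECT is
        // released before the UPDATE runs. A single UPDATE statement with an
        // OUTPUT clause increments and reads back in one atomic step.
        static int IncrementCounter(SqlConnection connection, SqlTransaction transaction)
        {
            using (var cmd = new SqlCommand(
                "UPDATE tblCount SET counter = counter + 1 OUTPUT inserted.counter",
                connection, transaction))
            {
                // Concurrent transactions serialize on the row's lock here,
                // so no increment is silently overwritten.
                return (int)cmd.ExecuteScalar();
            }
        }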

    Read the article

  • Using for or while loops

    - by Gary
    Every month, 4 or 5 text files are created. The data in the files is pulled into MS Access and used in a mail merge. Each file contains a header. This is an example:

        HEADER|0000000130|0000527350|0000171250|0000058000|0000756600|0000814753|0000819455|100106

    The 2nd field is the number of records contained in the file (excluding the header line). The last field is the date in the form yymmdd. Using gawk (for Windows), I've done OK with rearranging/modifying the data and writing it all out to a new file for importing into Access, except for the following: I'm trying to create a unique ID number for each record. The ID number has the form 1mmddyyXXXX, where XXXX is a number padded with leading zeros. Using the header above, the first record in the output file would get the ID number 10106100001 and the last record would get the ID 10106100130. I've tried putting the second header field into a variable, rearranging the last header field into the required date format, and then looping with "for" statements to append the XXXX part of the ID and outputting it all with printf, but so far I've been complete rubbish at it. Thanks for your help! gary
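
    The ID construction itself is just string rearrangement plus zero-padding; a sketch of that logic in C# rather than awk, with the field values hard-coded from the sample header above (splitting on '|' assumed already done):

        // Sketch: derive 1mmddyyXXXX IDs from the header's record count
        // (field 2) and yymmdd date (last field).
        int recordCount = int.Parse("0000000130");        // header field 2 -> 130
        string yymmdd   = "100106";                       // header last field
        string mmddyy   = yymmdd.Substring(2, 2)          // mm
                        + yymmdd.Substring(4, 2)          // dd
                        + yymmdd.Substring(0, 2);         // yy -> "010610"
        for (int i = 1; i <= recordCount; i++)
        {
            string id = "1" + mmddyy + i.ToString("D4");  // D4 = 4-digit zero pad
            // yields 10106100001 for the first record, 10106100130 for the last
        }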

    Read the article

  • Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed issue with hib

    - by Bijendra Singh
    I am getting "Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed" when my method is called from the portlet. I am using Spring transaction management, configured through AOP in the Spring configuration file, to handle all the Hibernate transactions. When I run my Hibernate DAO method for persisting the data through JUnit, it works fine. Exception description: when I run my code through the unit test case, the data gets updated in the database properly; but when I run the same code integrated with the portlet, the code executes fine, yet after the transaction completes the records are not updated in the database. The following errors can be seen in the log:

        [4/7/10 23:06:38:685 MDT] 0000006c LocalTranCoor E WLTC0014E: Cannot enlist Synchronization. LocalTransactionContainment is completing or completed.
        [4/7/10 23:06:38:689 MDT] 0000006c LocalTransact E J2CA0026E: Method addSync caught java.lang.IllegalStateException: Cannot enlist Synchronization. LocalTransactionCoordinator is completing or completed.
            at com.ibm.ws.LocalTransaction.LocalTranCoordImpl.enlistSynchronization(LocalTranCoordImpl.java(Compiled Code))
            at com.ibm.ejs.j2c.LocalTransactionWrapper.addSync(LocalTransactionWrapper.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.initializeForUOW(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.involveMCInTran(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.associateConnection(ConnectionManager.java(Compiled Code))
            at com.ibm.ejs.j2c.ConnectionManager.associateConnection(ConnectionManager.java(Compiled Code))
            at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.reactivate(WSJdbcConnection.java(Compiled Code))
            at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.getWarnings(WSJdbcConnection.java:1539)

    Read the article

  • EAV Database Schema

    - by GLO
    Hello Stack Overflow community! I believe my question concerns all the DB gurus here! Do you know the EAV DB schema (http://en.wikipedia.org/wiki/Entity-attribute-value_model) and what is said about this model's performance? I wonder, if I break this model into smaller tables, what would the result be? Let's talk about it. I have a db with more than 100K records: a lot of categories and many items (with different properties per category), and everything is stored in an EAV. If I break this scheme and create a unique table for each category, is that something I will have to avoid? Yes, I know that I'll probably have a lot of tables and I'll need to ALTER them if I want to add an extra field, BUT is this so wrong? I have also read that the more tables I have, the more files the db will be populated with, and this isn't good for any filesystem. Any suggestion? Thank you!

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see that as a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • MVC 2.0 - JqGrid Sorting with Multiple Tables

    - by Billy Logan
    I am in the process of implementing the jqGrid and would like to be able to use the sorting functionality, but I have run into some issues with sorting columns that belong to a related table. Here is the script that loads the grid:

        public JsonResult GetData(GridSettings grid)
        {
            try
            {
                using (IWE dataContext = new IWE())
                {
                    var query = dataContext.LKTYPE.Include("VWEPICORCATEGORY").AsQueryable();

                    // sorting
                    query = query.OrderBy<LKTYPE>(grid.SortColumn, grid.SortOrder);

                    // count
                    var count = query.Count();

                    // paging
                    var data = query.Skip((grid.PageIndex - 1) * grid.PageSize)
                                    .Take(grid.PageSize).ToArray();

                    // converting to grid format
                    var result = new
                    {
                        total = (int)Math.Ceiling((double)count / grid.PageSize),
                        page = grid.PageIndex,
                        records = count,
                        rows = (from host in data
                                select new
                                {
                                    TYPE_ID = host.TYPE_ID,
                                    TYPE = host.TYPE,
                                    CR_ACTIVE = host.CR_ACTIVE,
                                    description = host.VWEPICORCATEGORY.description
                                }).ToArray()
                    };
                    return Json(result, JsonRequestBehavior.AllowGet);
                }
            }
            catch (Exception ex)
            {
                // send the error email
                ExceptionPolicy.HandleException(ex, "Exception Policy");
            }

            // have to return something if there is an issue
            return Json("");
        }

    As you can see, the description field is part of the related table ("VWEPICORCATEGORY"), while the OrderBy is targeted at LKTYPE. I am trying to figure out how exactly one goes about sorting on that particular field, or maybe even a better way to implement this grid, with its sorting functionality, across multiple tables. Thanks in advance, Billy
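
    One hedged possibility (a sketch, untested against this model): when the requested sort column lives on the related table, order by the navigation property with a typed expression instead of the string-based OrderBy extension, which presumably only resolves columns on LKTYPE:

        // Sketch: special-case the related-table column; property names are
        // taken from the snippet above.
        if (grid.SortColumn == "description")
        {
            query = grid.SortOrder == "desc"
                ? query.OrderByDescending(t => t.VWEPICORCATEGORY.description)
                : query.OrderBy(t => t.VWEPICORCATEGORY.description);
        }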

    Read the article

  • Is this a good approach to address double-base64-encoding?

    - by Freiheit
    My software understands attachments, like PNGs attached to user records. These attachments are usually sent in from outside sources as a Base64-encoded string. The database stores whatever data it is given, Base64 encoded or not. When I serve up the attachment for download I do this:

        if (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    There is a potential for data that is double encoded. For instance, the sender of a message had Base64-encoded data, then encoded it again when building the message to send to me. I think the following code would address that circumstance:

        while (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    So if data is encoded multiple times, it would be decoded until it's in its 'raw' state and then served up for download. Is this approach an acceptable way to address that problem? Ideally, some sort of checking could happen at the edge when I receive attachment data, but that will take more time; this looping seems to be a faster way to do it. The 'Base64' library is Apache Commons (http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html) and I trust it to properly identify Base64-encoded data.
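
    For comparison, the same loop sketched in C# (the question's code is Java with Commons Codec); here Convert.FromBase64String throwing FormatException stands in for the isBase64 check, and a depth cap is added so pathological input can't keep decoding forever:

        using System;
        using System.Text;

        // Sketch: decode repeatedly until the payload stops being valid
        // Base64. maxDepth is an arbitrary safety cap, not part of the
        // original approach.
        static byte[] DecodeFully(string data, int maxDepth = 5)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(data);
            for (int depth = 0; depth < maxDepth; depth++)
            {
                try
                {
                    // throws FormatException once the bytes are no longer Base64
                    bytes = Convert.FromBase64String(Encoding.UTF8.GetString(bytes).Trim());
                }
                catch (FormatException)
                {
                    break; // raw state reached
                }
            }
            return bytes;
        }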

    Read the article

  • Best way to change jqGrid rowNum from ALL to -1 before pass to a web service

    - by Billyhole
    I'm looking to find the best way to allow users to choose to show ALL records in a jqGrid. I know that a -1 value passed for the rows parameter denotes ALL, but I want the word "ALL", not a -1, to appear in the rowList select element, i.e. rowList: [15, 50, 100, 'ALL']. I'm passing the grid request to a web service which accepts an int for "rows", and I'm trying to find how and when I should change the user-selected value of "ALL" to a -1 before it gets sent to the web service. Below is my cleaned-up grid code. I tried some various code blocks before my $.ajax in the datatype function, but most attempts just seemed like I had to be doing this in the most convoluted way I possibly could. For example:

        datatype: function(postdata) {
            if ($("#gridTableAssets").jqGrid('getGridParam', 'rowNum') == 'ALL') {
                $("#gridTableAssets").appendPostData({ "rows": -1, "page": 1 });
            }
            $.ajax({...

    But doing that seemed to cause the actual "page" GridParam to be nulled out on subsequent grid actions, forcing me to handle that in other places. It just seems like this is something that would be frequently done out there and would have a clean way of doing it. Cleaned grid code:

        $("#gridTableAssets").jqGrid({
            datatype: function(postdata) {
                $.ajax({
                    url: "/Service/Repository.asmx/GetAssets",
                    data: JSON.stringify(postdata),
                    type: 'POST',
                    contentType: "application/json; charset=utf-8",
                    error: function(XMLHttpRequest, textStatus, errorThrown) {
                        alert('error');
                    },
                    success: function(msg) {
                        var assetsGrid = $("#gridTableAssets")[0];
                        assetsGrid.addJSONData(JSON.parse(msg));
                        ...
                    }
                });
            },
            ...
            pager: $('#pagerAssets'),
            rowNum: 15,
            rowList: [15, 50, 100, 'ALL'],
            ...
            onPaging: function(index, colindex, sortorder) {
                SessionKeepAlive();
            }
        });

    And here is the web service:

        [WebMethod]
        public string GetAssetsOfAssetStructure(bool _search, int rows, int page,
                                                string sidx, string sord, string filters)
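
    An alternative sketch is to leave the client grid alone and translate on the server instead; this assumes the rows parameter can be changed from int to string in the service signature, and BuildGridJson is a hypothetical helper standing in for the existing query logic:

        // Sketch: accept "ALL" (or a number) and normalize it to -1 server-side.
        // The int -> string signature change is an assumption.
        [WebMethod]
        public string GetAssetsOfAssetStructure(bool _search, string rows, int page,
                                                string sidx, string sord, string filters)
        {
            int rowCount = rows == "ALL" ? -1 : int.Parse(rows);
            // ... run the query, treating -1 as "no limit" ...
            return BuildGridJson(rowCount, page, sidx, sord, filters); // hypothetical helper
        }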

    Read the article

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question: Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?

    Background: I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches long term. Once the IDs are set, they don't have to change often, and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter: encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for." I can't find anything online to support either position, so I'm turning to all the gurus here at SA!

    Example: Imagine this simplified list of database records representing food in a grocery store; the first set represents data that has meaning encoded in the IDs, while the second does not.

    IDs with meaning:

        Type
        1   Fruit
        2   Veggie

        Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot

        Location
        41  Aisle four, top shelf
        42  Aisle four, bottom shelf
        51  Aisle five, top shelf
        52  Aisle five, bottom shelf

        ProductLocation
        10141 Apple on aisle four, top shelf
        10241 Banana on aisle four, top shelf
        // just by reading the IDs, it's easy to recognize that these are both Fruit on aisle four

    IDs without meaning:

        Type
        1   Fruit
        2   Veggie

        Product
        1   Apple
        2   Banana
        3   Orange
        4   Lettuce
        5   Onion
        6   Carrot

        Location
        1   Aisle four, top shelf
        2   Aisle four, bottom shelf
        3   Aisle five, top shelf
        4   Aisle five, bottom shelf

        ProductLocation
        1   Apple on aisle four, top shelf
        2   Banana on aisle four, top shelf
        // given the IDs, it's harder to see that these are both fruit on aisle four

    Summary: What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?

    Read the article

  • choosing row (from group) with max value in a SQL Server Database

    - by sriehl
    I have a large database and am putting together a report of the data. I have aggregated and summed the data from many tables to get two tables that look like the following:

        Table 1              Table 2
        id | code | value    id | code | value
        13 | AA   | 0.5      13 | AC   | 2.0
        13 | AB   | 1.0      14 | AB   | 1.5
        14 | AA   | 2.0      13 | AA   | 0.5
        15 | AB   | 0.5      15 | AB   | 3.0
        15 | AD   | 1.5      15 | AA   | 1.0

    I need to get a list of ids, each with the code (summed from both tables) that has the largest value:

        13 | AC
        14 | AA
        15 | AB

    There are 4-6 thousand records, and it is not possible to change the original tables. I'm not too worried about performance, as I only need to run it a few times a year.

    Edit: Let me see if I can explain a bit more clearly. Imagine the id is the customer id, the code is who they ordered from, and the value is how much they spent there. I need a list of all the customer ids and the store each customer spent the most money at (and if they spent the same at two different stores, put a value such as 'ZZ' in for the store name).
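
    For clarity, here is the same logic as a C# LINQ sketch (the report itself would presumably stay in SQL; the Id/Code/Value record type is invented for illustration, and the tie-breaking 'ZZ' case is not handled):

        // Sketch: union both tables, sum value per (id, code), then keep the
        // code with the largest sum for each id.
        var topCodePerId = table1.Concat(table2)
            .GroupBy(r => new { r.Id, r.Code })
            .Select(g => new { g.Key.Id, g.Key.Code, Total = g.Sum(r => r.Value) })
            .GroupBy(x => x.Id)
            .Select(g => g.OrderByDescending(x => x.Total).First());
        // yields 13 -> AC (2.0), 14 -> AA (2.0), 15 -> AB (3.5)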

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without a prefix: the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

        - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
        - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
        - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
        - When I keep the FirstName, LastName, FullName, and State searches but use LIKE for each one, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
        - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
        - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design... What I'm wondering is, though there are many problems with this design, what is causing this specific problem? My guess is that the index trees are just too large and it takes a very long time traversing them (FirstName, LastName, FullName). Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic to auto-adjust everything in Java? Or should that part be in a PL/SQL stored procedure call? This is something we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert:

        = Example 1 =
        In DB (Start, End, Value):
            (0, 10, 'X')
            **(30, 100, 'Z')
            (200, 500, 'Y')
        Input: (20, 50, 'A')
        Gives:
            (0, 10, 'X')
            **(20, 50, 'A')
            **(51, 100, 'Z')
            (200, 500, 'Y')

        = Example 2 =
        In DB (Start, End, Value):
            (0, 10, 'X')
            **(30, 100, 'Z')
            (200, 500, 'Y')
        Input: (40, 80, 'A')
        Gives:
            (0, 10, 'X')
            **(30, 39, 'Z')
            **(40, 80, 'A')
            **(81, 100, 'Z')
            (200, 500, 'Y')

        = Example 3 =
        In DB (Start, End, Value):
            (0, 10, 'X')
            **(30, 100, 'Z')
            (200, 500, 'Y')
        Input: (50, 120, 'A')
        Gives:
            (0, 10, 'X')
            **(30, 49, 'Z')
            **(50, 120, 'A')
            (200, 500, 'Y')

        = Example 4 =
        In DB (Start, End, Value):
            (0, 10, 'X')
            **(30, 100, 'Z')
            (200, 500, 'Y')
        Input: (20, 120, 'A')
        Gives:
            (0, 10, 'X')
            **(20, 120, 'A')
            (200, 500, 'Y')

    The algorithm is as follows:

        given range = g; input range = i; output range set = o

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)
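
    A sketch of the adjustment in C# for concreteness (the production version would live in Java or PL/SQL; the tuple-based range representation is invented):

        using System.Collections.Generic;

        // Sketch: given an existing range g that overlaps new input i, return
        // the surviving pieces of g; i itself is inserted unchanged. This
        // collapses the four cases above into two boundary checks.
        static List<(int Start, int End)> SurvivingPieces(
            (int Start, int End) g, (int Start, int End) i)
        {
            var pieces = new List<(int Start, int End)>();
            if (i.Start > g.Start)
                pieces.Add((g.Start, i.Start - 1)); // left remainder of g
            if (i.End < g.End)
                pieces.Add((i.End + 1, g.End));     // right remainder of g
            return pieces;                          // empty when i fully covers g
        }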

    Read the article

  • C# Entity Framework MySQL Slow Queries Count()

    - by Matthew M.
    Hello, I'm having a serious issue with MySQL and Entity Framework 4.0. I have dropped a table onto the EF designer surface, and everything seems OK. However, when I perform a query in the following fashion:

        using (entityContext dc = new entityContext())
        {
            int numRows = dc.myTable.Count();
        }

    the query that is generated looks something like this:

        SELECT `GroupBy1`.`A1` AS `C1`
        FROM (SELECT Count(1) AS `A1`
              FROM (SELECT `pricing table`.`a`, `pricing table`.`b`, `pricing table`.`c`,
                           `pricing table`.`d`, `pricing table`.`e`, `pricing table`.`f`,
                           `pricing table`.`g`, `pricing table`.`h`, `pricing table`.`i`
                    FROM `pricing table` AS `pricing table`) AS `Extent1`) AS `GroupBy1`

    As should be evident, this is an excruciatingly unoptimized query: it is selecting every single row! This is not optimal, nor is it even possible for me to use MySQL + EF at this point. I have tried both the MySQL 6.3.1 connector [that was fun to install] and DevArt's dotConnect for MySQL, and both produce the same results. This table has 1.5 million records and the count takes 6-11 seconds to execute! What am I doing wrong? Is there any way to optimize this [and other queries] to produce sane code like SELECT COUNT(*) FROM table? Generating the same query using SQL Server takes virtually no time and produces sane code. Help! Thanks! Matthew

    Read the article

  • MySQL PowerDNS issue when adding a new CNAME record

    - by Roland
    I'm trying to add a new CNAME record for PowerDNS to the MySQL database, but my website does not show up. When I add it via Zone Admin it works, but when I add the record as below it does not. Am I doing something wrong here? I checked whether my record looks exactly the same in the DB as the record added with PowerDNS, and it does.

        $type = 'CNAME';
        // Adding the subdomain to the DNS database
        $sql = "insert into records " .
               "(domain_id, name,type,content,ttl,prio,change_date) values(";
        $sql .= $domain_id . ",'";
        $sql .= trim($subdomain) . "." . trim($domain) . "','";
        $sql .= trim($type) . "','";
        $sql .= trim($domain) . "',";
        $sql .= "3600,0,'" . time() . "')";

    Read the article

  • SQL CE: Limiting rows returned in the query

    - by Diakonia7
    In SQL Compact Edition 3.5 (note that it is the Compact Edition I am talking about), is there a way to limit the number of rows returned to only 2? Something like using LIMIT or TOP. I really don't want to use anything with a SqlCeDataReader or SqlCeResultSet; I want to do all the limiting in the query. Is this possible now? I have looked around and it doesn't seem so.

    Edit: In response to Dave Swersky's request for data and using Min()/Max() on some columns as a means to get the top 2 lines, here is some sample (sanitized) data:

        Line  Site       Function  Status
        1010  Las Vegas  new       4
        1020  DC         send      1
        1030  Portland   copy      1
        1040  SF         copy      1
        1050  Portland   copy      1
        1060  DC         send      1

    There are more columns than this, but these are the significant ones. Sorry for the lack of intuitive data (the actual data is even less intuitive!), but for security I need to change it. So, I need to determine the site the record was at in the preceding line, to determine where it needs to be picked up. The site on any given line (except the first line, with function = 'new') corresponds to where the item is going next, so simply grabbing the site off the same line won't tell me where it came from. The status will always be 1 or 4. A 4 corresponds to a record that has already been delivered, so I don't want to include those records in the result, but they might be useful in getting the pickup site. For this table of data, I want the query to return the site corresponding to the line just above the first line with status 1. For this example, that would be Las Vegas.

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However, I will need a PHP framework for this, as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMWare image for development) and can configure it as needed. In no specific order:

    - It has to be easily scalable; this is crucial. If the game becomes a steady success, it will eventually outgrow the server beyond what the host provides and will have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework, please explain what would be needed.
    - It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated.
    - It should support internationalization. I think there is no excuse for any web framework not to provide a means of translating the application and switching between languages without a lot of effort from the programmer.
    - It must have very good community support, and preferably commercial support as well. Yes, I do know QCodo/QCubed is nice, but it is not mature enough for this task.
    - Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo, or one of them alone; I'm not exactly sure.
    - Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice, if it is generally reliable and does not interfere with the other requirements. This seems to be the case with CakePHP.

    I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say that is easy for this task, so I'm calling on your experience with building applications with similar requirements. So far I'm torn between Zend and CakePHP by the general criteria; however, all well-known frameworks offer the same functionality in some way or another, with different approaches, each with its own advantages and disadvantages.

    Edits: I am kinda new to MVC; however, I am willing to learn it, and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures you recommend. I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told). Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id=numeric(0))

        json_data <- fromJSON(json_file)
        n_people <- length(json_data)

        for(person in 1:n_people) {
          person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
          people_data <- merge(people_data, person_dataframe, all=TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file=output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]] person_id, name, gender, hair_color
        [[2]] person_id, name, location, gender, height
        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"),
                .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan

    Read the article

  • Javascript Error with DataTable jQuery plugin

    - by stevoyoung
    I am getting a JS error and want to know what it means and how to solve it. (JS noob here.)

    Error: "tId is not defined"
    Line of JS with error: "if (s[i].sInstance = tId) {"

    More information: I am using the DataTables (http://datatables.net) jQuery plugin. I have two tables with a class of "dataTable" loaded on a page (inside jQuery UI tabs). The tables render as expected, but I get the error above in Firebug. Attached is my DataTables config file:

        $(document).ready(function() {
            // Taken from: http://datatables.net/forums/comments.php?DiscussionID=1507
            // Before creating a table, make sure it is not already created.
            // And if it is, then remove the old version before the new one is created.
            var currTable = $(".dataTable");
            if (currTable) {
                // contains the dataTables master records
                var s = $(document).dataTableSettings;
                if (s != 'undefined') {
                    var len = s.length;
                    for (var i = 0; i < len; i++) {
                        // if already exists, remove from the array
                        if (s[i].sInstance = tId) {
                            s.splice(i, 1);
                        }
                    }
                }
            }
            oTable = $('.dataTable').dataTable({
                "bJQueryUI": true,
                "sPaginationType": "full_numbers",
                "bFilter": false
            });
        });

    What does the error mean, and how do I resolve it?

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project using NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that the entity object doesn't get updated after insertions when I call the collection getter for the foreign-key property. Example: I have 2 tables, Table1 and Table2, with an sno (id) column that is a primary key in Table1 and a foreign key in Table2. Through the find method I get a particular sno object (existing in Table1), set some values, persist them to Table2, and commit the transaction. When I then select the same sno object through the find method and get its Table2 collection through the bean's getTable2Collection() (already generated in the bean by TopLink Essentials), I'm unable to see the latest added record, although all its other records are displayed. After I close and reopen the application, the new record is reflected when fetching the same sno through the above process. I came to know that this is a kind of lazy fetching, and that there should be some way to change the fetch policy so the entity object gets updated with the changes. So please help me in this regard. Regards, Chandu

    Read the article

  • Linq: the linked objects are null, why?

    - by user46503
    Hello, I have several linked tables (entities). I'm trying to get the entities using the following LINQ:

        ObjectQuery<Location> locations = context.Location;
        ObjectQuery<ProductPrice> productPrice = context.ProductPrice;
        ObjectQuery<Product> products = context.Product;

        IQueryable<ProductPrice> res1 = from pp in productPrice
                                        join loc in locations on pp.Location equals loc
                                        join prod in products on pp.Product equals prod
                                        where prod.Title.ToLower().IndexOf(Word.ToLower()) > -1
                                        select pp;

    This query returns 2 records: ProductPrice objects that have linked Location and Product objects, but those are null and I cannot understand why. If I try to fill them in the LINQ as below:

        res = from pp in productPrice
              join loc in locations on pp.Location equals loc
              join prod in products on pp.Product equals prod
              where prod.Title.ToLower().IndexOf(Word.ToLower()) > -1
              select new ProductPrice
              {
                  ProductPriceId = pp.ProductPriceId,
                  Product = prod
              };

    I get the exception "The entity or complex type 'PBExplorerData.ProductPrice' cannot be constructed in a LINQ to Entities query". Could someone please explain to me what happens and what I need to do? Thanks
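
    A sketch of one common way to get the navigation properties populated: eager-load them with Include() instead of relying on the joins, since joining by itself doesn't fill navigation properties in Entity Framework (untested against this exact model):

        // Sketch: Include() asks EF to load the related entities along with
        // each ProductPrice, so pp.Product and pp.Location come back non-null.
        IQueryable<ProductPrice> res1 =
            context.ProductPrice
                   .Include("Product")
                   .Include("Location")
                   .Where(pp => pp.Product.Title.ToLower().Contains(Word.ToLower()));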

    Read the article

  • Rails 3 memory issue

    - by Erik
    Hello! I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work. Now, though, I'm having HUGE problems with Rails consuming huge amounts of memory. My application today consumes about 10 MB per request, and it doesn't seem to release it either. I thought this might be because of bloat in my application, so I created a test app to compare. For the test app I just generated a model with a scaffold and then created about 20 records on this model. I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than my app, but still about 1-3 MB per request. I'm working on OS X Leopard with Ruby 1.8.7, Rails 3.0.0.beta, and a SQLite db for development. Does anyone recognize my problem? I would really appreciate some help here. :/ Thanks!

    Read the article
