Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" "sender_id" I want to shard the "message" table by "recipient_id". That way to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that ask for all messages sent by a certain sender. Now I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message has been sent, I need to insert the message into both "message" and "message_by_sender" tables. But what if after inserting into "message" the insertion into "message_by_sender" fail? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender" without resorting to 2 phase commit? This must be a very common issue for anyone who shards their databases. How do you deal woth it?

    Read the article

  • What's the largest (most complex) PHP algorithm ever implemented in a single monolithic PHP script?

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for some meaningful PHP benchmark with the following characteristics: The source needs to be in a single file. I need it to be simple to set up - no databases, hard-to-find input files, etc. Simple text input and output preferred. It should not use features that are slow in Java (BigInteger, trigonometric functions, etc). It should not use esoteric or dynamic PHP features (e.g. no "eval" or "variable variables"). It should not over-rely on built-in libraries, e.g. MD5, crypt, etc. It should not be I/O bound. A CPU-bound memory-hungry algorithm is preferred. Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information. We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date. But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?

    Read the article

  • Does overloading Grails static 'mapping' property to bolt on database objects violate DRY?

    - by mikesalera
    Does Grails' static 'mapping' property in Domain classes violate DRY? Let's take a look at the canonical domain class:
        class Book {
            Long id
            String title
            String isbn
            Date published
            Author author
            static mapping = {
                id generator:'hilo', params:[table:'hi_value', column:'next_value', max_lo:100]
            }
        }
    or:
        class Book {
            ...
            static mapping = {
                id( generator:'sequence', params:[sequence_name: "book_seq"] )
            }
        }
    And let us say, continuing this thought, that I have my Grails application working with HSQLDB or MySQL, but the IT department says I must use a commercial software package (written by a large corp in Redwood Shores, Calif.). Does this change make my web application nobbled in development and test environments? MySQL supports autoincrement on a primary key column but does not support sequence objects, for example. Is there a cleaner way to implement this sort of 'only when in production mode' without loading up the domain class?

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches: all data is stored in one database, giving me the advantage of a single point of updates; or separate databases, so each client has its own database, giving me the advantage of being able to measure bandwidth. Option 1 has the disadvantage that measuring bandwidth per client is hard, while option 2 loses the single-point-of-update structure. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database)? Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen
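    The "zip with a conf file" idea can stay very small: a manifest that says where each packaged file goes plus an optional SQL migration, applied by one script on the client install. A toy sketch of such an applier, in Python with sqlite3 purely for brevity (the manifest format, file names, and database are all invented for illustration):

        import json, os, shutil, sqlite3, zipfile

        def apply_update(package_path, site_root, db_path):
            with zipfile.ZipFile(package_path) as pkg:
                # e.g. manifest.json: {"files": {"img/logo.png": "public/img/logo.png"}, "migration": "update.sql"}
                manifest = json.loads(pkg.read("manifest.json"))

                # copy each packaged file to the location the manifest names
                for member, target in manifest["files"].items():
                    dest = os.path.join(site_root, target)
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    with pkg.open(member) as src, open(dest, "wb") as dst:
                        shutil.copyfileobj(src, dst)

                # extend the database schema if the package carries a migration script
                if "migration" in manifest:
                    sql = pkg.read(manifest["migration"]).decode("utf-8")
                    con = sqlite3.connect(db_path)
                    try:
                        con.executescript(sql)
                    finally:
                        con.close()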

    Read the article

  • A business Case for Enterprise Python

    - by Srirangan
    Hi all, This will not be a "programming" question but more of a technology/platform question. I'm trying to figure out whether Python can be a suitable Java alternative for enterprise / web applications. Which are the ideal cases where you would prefer to use Python instead of Java? How would a typical Python web application (databases/sessions/concurrency) perform compared to a typical Java application? How do specific Python frameworks square up against Java-based frameworks (Spring, SEAM, Grails etc.)? For businesses, is switching from a Java infrastructure to a Python infrastructure too hard/expensive/resource intensive/not viable? Also shed some light on the business case for providing a Python + Google App Engine based solution to the end customer. Will it be cost effective in a typical scenario? Sorry if I am asking too wide a question; I would have liked to keep it specific, but I need your help to evaluate Python as a whole from the perspectives of the programmers, the service-providing company and the end business customer. For an SME, a Python/Google App Engine based technology stack is clearly a scalable and affordable platform. But what about a large MNC that already has a lot invested in Java? Thank you so much. I am researching this myself and will gladly share my conclusions here! Thank you, Srirangan

    Read the article

  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all, I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined by another tool and is not eligible to change. It may contain embedded selects and table aliases in the FROM part. For my task I have found two open-source (this is a project requirement) libraries that provide JDBC drivers, plus two other options:
    1. CsvJdbc
    2. XlSQL
    3. JBoss Teiid
    4. Create an Apache Derby DB, load all CSVs as tables and execute the query.
    These are the problems I encountered:
    1. It does not accept the syntax of the SQL (it uses internal selects and table aliases). Furthermore, it has not been maintained since 2004.
    2. I could not get it to work, as it has as a dependency a SAX parser that causes exceptions when parsing other documents. Similarly, no change since 2004.
    3. Have not checked if it supports the syntax, but it seems like an overhead. It needs several entities defined (Virtual Databases, Bindings). From the mailing list they told me that the last release supports runtime creation of the required objects. Has anyone used it for such a simple task (normally it can connect to several types of data, like CSV, XML or other DBs, and create a virtual, unified one)?
    4. Can this even be done easily?
    From the 4 things I considered/tried, only 3 and 4 seem viable to me. Any advice on these, or any other way in which I can query my CSV files? Cheers
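    For option 4 (load the CSV files into a real database, then run the unmodified SQL against it), the mechanics are simple regardless of engine. A rough sketch of the idea, using Python's sqlite3 instead of Derby/JDBC purely to keep it short and runnable; the directory, table, and query names are made up, and the real project would do the equivalent through JDBC:

        import csv, glob, os, sqlite3

        def load_csvs_and_query(csv_dir, sql):
            con = sqlite3.connect(":memory:")
            for path in glob.glob(os.path.join(csv_dir, "*.csv")):
                table = os.path.splitext(os.path.basename(path))[0]   # one table per file
                with open(path, newline="") as f:
                    reader = csv.reader(f)
                    header = next(reader)
                    cols = ", ".join('"%s"' % h for h in header)
                    marks = ", ".join("?" for _ in header)
                    con.execute('CREATE TABLE "%s" (%s)' % (table, cols))
                    con.executemany('INSERT INTO "%s" VALUES (%s)' % (table, marks), reader)
            # nested selects and table aliases in the predefined SQL work unchanged
            return con.execute(sql).fetchall()

        # e.g. load_csvs_and_query("data", "SELECT o.id FROM orders o JOIN customers c ON c.id = o.cust_id")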

    Read the article

  • Parsing a blackberry .ipd file

    - by galaxywatcher
    I recently lost my Blackberry. When I discovered it was gone very shortly afterwards and called it, the sim card had already been removed. I ain't seeing that Blackberry again. Ok. I am out $300, but at least my data is backed up. I had an older working Blackberry, fortunately, and I got a new sim card and proceeded to restore my data using Blackberry Desktop Manager. 7000+ emails, hundreds of autotext entries, sms messages, calendar events, all backing up. Looking good. Lo and behold! My Address Book contacts refuse to back up. I try Advanced, and it is greyed out as an option to restore. Far more frustrating than losing my Blackberry in the first place is wrangling with software that defies human logic. Ok, now I guess I will have to enter all 327 names by hand. That is, if I can read the .ipd file. I have tried the free version of ABC Amber Blackberry editor, but when I open the .ipd file, the contacts just do not show up. I am beginning to feel like the gods are conspiring against me. Then I found this: http://jabide.com/2009/03/parse-blackberry-ipd-files/ He posted a perl script that claims to extract the data. I copied and pasted the code and it did list all the different databases in my .ipd file; I was elated that a cool solution like this had been published. But when I followed the instructions, garbled data with some discernible ASCII was sent to standard output, instead of the .csv file he said it would produce. This is enough to make a grown man cry. Does anyone out there have a solution to extract my address book contacts from an .ipd file?

    Read the article

  • NMock2.0 - how to stub a non interface call?

    - by dferraro
    Hello, I have a class API which has full code coverage and uses DI to mock out all the logic in the main class function (Job.Run) which does all the work. I found a bug in production where we weren't doing some validation on one of the data input fields. So, I added a stub function called ValidateFoo() and wrote a unit test against this function to expect a JobFailedException. I ran the test - it failed, obviously, because that function was empty. I added the validation logic, and now the test passes. Great, now we know the validation works. Problem is - how do I write the test to make sure that ValidateFoo() is actually called inside Job.Run()? ValidateFoo() is a private method of the Job class - so it's not an interface... Is there any way to do this with NMock 2.0? I know TypeMock supports fakes of non-interface types. But changing mock libs right now is not an option. At this point, if NMock can't support it, I will simply just add the ValidateFoo() call to the Run() method and test things manually - which obviously I'd prefer not to do, considering my Job.Run() method has 100% coverage right now. Any advice? Thanks very much, it is appreciated. EDIT: the other option I have in mind is to just create an integration test for my Job.Run functionality (injecting true implementations of the composite objects into it instead of mocks). I will give it a bad input value for that field and then validate that the job failed. This works and covers my test - but it's not really a unit test but instead an integration test that tests one unit of functionality... hmm... EDIT2: Is there any way to do this? Anyone have ideas? Maybe TypeMock - or a better design?

    Read the article

  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded though (6 months ago) I ran into some problems. Here's what I did: copied all my rails apps onto DVD, upgraded the OS, then transferred the rails apps from DVD to the new OS. Then, after setting up new SSH keys, I tried to push to some of my Heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception the remote end hung up". So I know that I'm doing something wrong here. First of all, is there any need for me to be putting my Heroku-hosted rails apps onto DVD? Would I be better just pulling all my apps from their Heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is because I tend to push a specific production branch to Heroku and sometimes omit large development files from it... Secondly, was this problem caused by the SSH keys? Should I have backed up the old keys and transferred them from my old OS to my new one too, or is Heroku perfectly happy to let you change OS's like that? My solution in the end was to just create new Heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the Heroku repos, as I tend to push a specific branch to Heroku and that branch doesn't always have all the development files in it... I realise that the error message I mentioned doesn't particularly help anyone but I didn't think to remember it 6 months ago. Any advice would be appreciated. PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.

    Read the article

  • Are there size limitations to the .NET Assembly format?

    - by McKAMEY
    We ran into an interesting issue that I've not experienced before. We have a large scale production ASP.NET 3.5 SP1 Web App Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Website Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special and the errors are coming from areas of the app not even changed. Using Reflector we inspected the offending methods to find that there were garbage strings in the code (which .NET Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the Assemblies used as inputs to aspnet_merge.exe during deployment. The aspnet_merge.exe / Web Deployment Project "Output Assemblies" properties are:
    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output
    In the web deployment project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly! My question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any Assembly format / aspnet_merge.exe gurus know about any such limitations like this. Seems to me like a 25MB Assembly, while big, isn't outrageous.

    Read the article

  • Why can't perfmon see instances of my custom performance counter?

    - by spoulson
    I'm creating some custom performance counters for an application. I wrote a simple C# tool to create the categories and counters. For example, the code snippet below is basically what I'm running. Then, I run a separate app that endlessly refreshes the raw value of the counter. While that runs, the counter and dummy instance are seen locally in perfmon. The problem I'm having is that the monitoring system we use can't see the instances in the multi-instance counter I've created when viewing remotely from another server. When using perfmon to browse the counters, I can see the category and counters, but the instances box is grayed out and I can't even select "All instances", nor can I click "Add". Using other access methods, like typeperf, exhibits similar issues. I'm not sure if this is a server or code issue. This is only reproducible in the production environment where I need it. On my desktop and development servers, it works great. I'm a local admin on all servers.
        CounterCreationDataCollection collection = new CounterCreationDataCollection();
        var category_name = "My Application";
        var counter_name = "My counter name";
        CounterCreationData ccd = new CounterCreationData();
        ccd.CounterType = PerformanceCounterType.RateOfCountsPerSecond64;
        ccd.CounterName = counter_name;
        ccd.CounterHelp = counter_name;
        collection.Add(ccd);
        PerformanceCounterCategory.Create(category_name, category_name, PerformanceCounterCategoryType.MultiInstance, collection);
    Then, in a separate app, I run this to generate dummy instance data:
        var pc = new PerformanceCounter(category_name, counter_name, instance_name, false);
        while (true)
        {
            pc.RawValue = 0;
            Thread.Sleep(1000);
        }

    Read the article

  • Making one table equal to another without a delete *

    - by Joshua Atkins
    Hey, I know this is a bit of a strange one but if anyone has any help that would be greatly appreciated. The scenario is that we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer db, and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote side (essentially delete *, then insert), but we are looking for something a bit more elaborate, as the downtime from the delete * is unacceptable. This is all reference data that controls menu items, functionality etc. of a major website. I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data into a tmp table, execute the diff, and drop anything from the destination table that is not in the source and then upsert everything else. The question is: is there an easy way to do this without using a cursor? To illustrate, the sproc returns a recordset structured like this:
        TableName  Col1  Col2  Col3
        Dest       ...
        Src        ...
    Anything in the recordset with TableName = Dest should be deleted (as it does not exist in Src) and anything in Src should be upserted into Dest. I cannot think of a way to do this purely set-based but my DB-fu is weak. Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need any more details.
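    The delete-what's-missing / upsert-the-rest step can be done purely set-based, with no cursor, directly against the destination table. A sketch of the shape of it, written with Python and sqlite3 only so it runs standalone; on SQL Server the same two set-based statements (or a MERGE, where available) express the same thing, and "id" here is a hypothetical primary-key column:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE dest (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT, col3 TEXT);
            CREATE TABLE tmp  (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT, col3 TEXT);
            INSERT INTO dest VALUES (1,'a','b','c'), (2,'x','y','z');   -- row 2 is stale
            INSERT INTO tmp  VALUES (1,'a','b','c'), (3,'new','row','!');
        """)

        with con:  # one short transaction instead of delete * plus a full reload
            # drop rows that no longer exist in the source
            con.execute("DELETE FROM dest WHERE id NOT IN (SELECT id FROM tmp)")
            # upsert everything present in the source
            con.execute("INSERT OR REPLACE INTO dest (id, col1, col2, col3) "
                        "SELECT id, col1, col2, col3 FROM tmp")

        print(con.execute("SELECT * FROM dest ORDER BY id").fetchall())
        # [(1, 'a', 'b', 'c'), (3, 'new', 'row', '!')]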

    Read the article

  • Database Error Handling: What if You have to Call Outside service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can always wrap our database call in a transaction (with or without a proper ORM), in a form like this:
        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }
    This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is this: when I do a transaction, in addition to that transaction I need to update another database (an example would be when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case? Let's say this is my code, and I have three databases that need to be updated (dbA, dbB, dbC):
        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }
    If dbC fails, I can roll back dbA but I can't roll back dbB. I think this problem should be database independent. And since I am using an ORM, this should be ORM independent as well. Update: Some of the database transactions are wrapped in an ORM, some are using naked PDO, OLE DB (or whatever bare-minimum language-provided database calls). So my solution has to take care of this. Any idea?
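    Without a distributed transaction coordinator there is no way to make the three commits truly atomic. The usual compromises are (a) do all the work on every connection first and only commit at the very end, so the failure window shrinks to the commit phase, and (b) record enough information to repair or compensate if one commit still fails. A bare-bones sketch of (a), written in Python over generic DB-API connections rather than PHP/Propel just to keep it self-contained; the function and argument names are invented:

        def update_across_databases(connections, work):
            # connections: list of open DB-API connections (dbA, dbB, dbC)
            # work: list of (connection, sql, params) tuples that should be all-or-nothing
            try:
                for con, sql, params in work:
                    con.cursor().execute(sql, params)     # changes stay uncommitted for now
            except Exception:
                for con in connections:
                    con.rollback()                        # nothing committed yet, safe to undo everything
                raise

            committed = []
            try:
                for con in connections:
                    con.commit()                          # the only remaining failure window is this loop
                    committed.append(con)
            except Exception:
                for con in connections:
                    if con not in committed:
                        con.rollback()
                # a commit that already happened cannot be rolled back: log enough detail here
                # for a repair job or a compensating update to bring the other databases in line
                raise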

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:
    - Yes, the SQL user is set up, using correct username and password
    - Yes, the SQL user has execute permissions on the stored procedures in the database
    - Yes, I have updated the TypeLib references to the new UUID
    - Yes, I have logged into the database via SSMS with the SQL user id and it can see and execute the stored procedures just fine in SSMS, but not from the web app
    - Yes, the SQL user has the database set as its default database
    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod). EDIT: I have also tried pointing the prod web box at the dev database server and get the same error so I'm fairly sure the issue isn't on the database side.

    Read the article

  • Tomcat Application Generating too many logs

    - by rohitgu
    Hi, I have an application which runs on a Tomcat 6.0.20 server on an Ubuntu Linux server. It generates a huge amount of logging in catalina.out; most of it is generated while using the application, but is not generated by the application itself. Some of the logs it generates are given below:
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: startElement(,,mime-type)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: Pushing body text ' '
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: New match='web-app/mime-mapping/mime-type'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: Fire begin() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester characters FINE: characters(audio/x-mpeg)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: endElement(,,mime-type)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: match='web-app/mime-mapping/mime-type'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: bodyText='audio/x-mpeg'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: Fire body() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: Popping body text '
    How can I turn them off? This is very important, since this is a production application. Regards, Rohit

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :( ) Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:
    * clientdb.prod.example.com for Production
    * clientdb.perf.example.com for Performance Testing
    * clientdb.qa.example.com for QA
    * clientdb.dev.example.com for Development
    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain. Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault
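    The "try qa.example.com first, then example.com" behaviour is what a DNS suffix search list gives you; on Windows it is configured per machine (for example via Group Policy's DNS suffix search list setting, or the adapter's "Append these DNS suffixes (in order)" option), and the resolver then qualifies short names in that order. The effect is easy to see in a few lines; the hostnames below are just the example ones from the quoted text:

        import socket

        def resolve_with_suffixes(short_name, suffixes):
            # mimics what the OS resolver does with a DNS suffix search list:
            # try each suffix in order and return the first name that resolves
            for suffix in suffixes:
                fqdn = "%s.%s" % (short_name, suffix)
                try:
                    return fqdn, socket.gethostbyname(fqdn)
                except socket.gaierror:
                    continue
            raise LookupError("'%s' did not resolve under any suffix" % short_name)

        # A QA server would carry ["qa.example.com", "example.com"] in its search list,
        # so every environment's config file can simply say "clientdb":
        # resolve_with_suffixes("clientdb", ["qa.example.com", "example.com"])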

    Read the article

  • How to profile Doctrine in Zend Framework

    - by David Zapata
    Good day. I'm using Doctrine as the ORM for my Zend Framework project. This is the first time I've used it. I've followed the ZendCasts Doctrine chapters, and everything works for me, but I needed to perform some profiling. There is a Doctrine_Connection_Profiler class that should be used to profile the Doctrine model's internal queries, but I've tried to use it without success. I always get a "PDOException: You cannot serialize or unserialize PDOStatement instances" exception when I perform my unit tests. Here is an example:
        $conn = Doctrine_Manager::connection($doctrineConfig['dsn'], $dbconfname);
        ...
        if( APPLICATION_ENV != 'production'){
            $obj_doctrine_profiler = new Doctrine_Connection_Profiler();
            $conn->setListener($obj_doctrine_profiler);
        }
    All of my unit tests work if I comment out or delete the $conn->setListener($obj_doctrine_profiler); line. This code block is located in my Bootstrap.php class; the weird thing is, the web application works just fine even with the mentioned code line. Thank you so much for your help. Please excuse me if my English is not the best.

    Read the article

  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to set up Dependency Injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire request (this is a web application), but sometimes it can change. For example, we might have 2 classes with the following dependencies (simplified version - the service actually has 4 dependencies):
        public LoginService (IUserRepository userRep) { }
        public UserRepository (IContext dbContext) { }
    In our IoC container, most of our dependencies are auto-wired except the Context, for which I have something like this (not actual code, it's from memory ... this is StructureMap):
        x.ForRequestedType().Use().WithCtorArg("connectionString").EqualTo(Session["ConnString"]);
    For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this:
        public CreateUserList(IList<string> connStrings)
        {
            foreach (connString in connStrings)
            {
                //first create dependency graph using new connection string ????
                //then call service method on new database
                _loginService.GetReportDataForAllUsers();
            }
        }
    My question is: how do we create that new dependency graph each time through the loop, while maintaining something that can easily be tested?
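    One common answer is to stop resolving the repository graph once per request for the admin path and instead inject a small factory whose only job is to build a fresh graph for a given connection string; the factory itself is easy to substitute in tests. A language-neutral sketch of the shape of it, in Python, with all the type names invented rather than taken from the real project:

        class Context:
            def __init__(self, connection_string):
                self.connection_string = connection_string

        class UserRepository:
            def __init__(self, context):
                self.context = context
            def all_users(self):
                return []          # would query self.context here

        class LoginService:
            def __init__(self, user_repository):
                self.user_repository = user_repository
            def report_data_for_all_users(self):
                return self.user_repository.all_users()

        class LoginServiceFactory:
            # the only thing the admin code depends on; tests can hand in a fake factory
            def create(self, connection_string):
                return LoginService(UserRepository(Context(connection_string)))

        def create_user_list(conn_strings, factory):
            results = []
            for conn in conn_strings:
                service = factory.create(conn)    # fresh dependency graph per database
                results.extend(service.report_data_for_all_users())
            return results

    The container still wires the 95% case as before; the admin function takes the factory, so the per-database wiring stays in one testable place.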

    Read the article

  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error: A copy of ApplicationController has been removed from the module tree but is still active! From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController (and I checked; it's just whichever filter runs first)", then offers the following:
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'
    which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails Engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful? More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?

    Read the article

  • A really smart rails helper needed

    - by Stefan Liebenberg
    In my Rails application I have a helper function:
        def render_page( permalink )
          page = Page.find_by_permalink( permalink )
          content_tag( :h3, page.title ) + inline_render( page.body )
        end
    If I called the page "home" with:
        <%= render_page :home %>
    and the "home" page's body was:
        <h1>Home</h1> bla bla
        <%= render_page :about %>
        <%= render_page :contact %>
    I would get the "home" page, with "about" and "contact"; it's nice and simple... right up to where someone goes and changes the "home" page's content to:
        <h1>Home</h1> bla bla
        <%= render_page :home %>
        <%= render_page :about %>
        <%= render_page :contact %>
    which will result in an infinite loop (a segfault on WEBrick)... How would I change the helper function to something that won't fall into this trap? My first attempt was along the lines of:
        @@list = []
        def render_page( permalink )
          unless @@list.include?(permalink)
            @@list += [ permalink ]
            page = Page.find_by_permalink( permalink )
            result = content_tag( :h3, page.title ) + inline_render( page.body )
            @@list -= [ permalink ]
            return result
          else
            content_tag :b, "this page is already being rendered"
          end
        end
    which worked in my development environment, but bombed out in production... any suggestions? Thank You Stefan

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    Ok, so here's the thing. I'm developing an existing (it started as a classic ASP app, so you can imagine :P) web application under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and the Visual Studio database project. This webapp has several "universes" (that's how we call it). Every universe has its own database (currently on the same server) but they all share the same schema (tables, sprocs, etc.) and the same source/site code. So manual deploying is really annoying, because I have to deploy the source code and then run the SQL scripts manually on each database. I know that manual deploys can cause problems, so I'm looking for a way to automate it. We've recently created a Visual Studio database project to manage the schema and generate the diff-schema scripts for different targets. I don't have an idea how to put the pieces together, but I would like to:
    - Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers so I can install things if required). With "sync" deploy I mean that I don't want to fully deploy the whole application, because it has lots of files and I just want to deploy those that are new or changed.
    - Generate diff-sql update scripts for every database target and combine them into just 1 script. For this I should have some list of the database names somewhere.
    - Copy the site files and execute the generated sql script in an easy and automated way.
    I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start and I really want to get rid of this manual deploy. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your options. I know this is not a very specific question but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool to deploy. Any help will be really appreciated. Thank you all, Regards
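    Whatever tool ends up driving it (MSBuild, NAnt, a PowerShell script...), the moving parts are small: a list of universe databases, one generated diff script per target, and a copy step that only ships new or changed files. A rough sketch of that orchestration in Python; every path, server name, and database name here is invented for illustration, and it assumes sqlcmd is available on the machine running the deploy:

        import filecmp, os, shutil, subprocess

        DATABASES = ["universe_alpha", "universe_beta"]        # hypothetical universe list
        SITE_SOURCE, SITE_TARGET = r"build\site", r"\\webserver\site"
        SQLCMD = ["sqlcmd", "-S", "dbserver", "-E"]            # -E = trusted connection

        def sync_changed_files(src, dst):
            # copy only files that are new or different ("sync" deploy instead of a full deploy)
            for root, _dirs, files in os.walk(src):
                for name in files:
                    s = os.path.join(root, name)
                    d = os.path.join(dst, os.path.relpath(s, src))
                    if not os.path.exists(d) or not filecmp.cmp(s, d, shallow=False):
                        os.makedirs(os.path.dirname(d), exist_ok=True)
                        shutil.copy2(s, d)

        def apply_schema_diffs(script_dir):
            # run the generated diff script once per universe database
            for db in DATABASES:
                script = os.path.join(script_dir, "%s.diff.sql" % db)
                subprocess.check_call(SQLCMD + ["-d", db, "-i", script])

        if __name__ == "__main__":
            sync_changed_files(SITE_SOURCE, SITE_TARGET)
            apply_schema_diffs(r"build\sql")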

    Read the article

  • Testing fault tolerant code

    - by Robert
    I'm currently working on a server application where we have agreed to try and maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs. This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault tolerant code. So far we've come up with two strategies but neither is entirely satisfactory:
    1. Have an external process watch the server code and then try and kill it off at what the external process thinks is an appropriate point in the test
    2. Add code to the application that will cause it to crash at certain known critical points
    My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault injection point and have it slip into a production environment.
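    A middle ground between the two strategies is to leave named crash points in the code but have them resolve to a no-op object everywhere except in a test harness, so nothing needs conditional compilation and nothing can be "forgotten on" in production. A small sketch of that idea; the class and point names are illustrative, not from the application described above:

        class NoFaults:
            # production implementation: crash points cost one method call and do nothing
            def crash_point(self, name):
                pass

        class CrashAt(NoFaults):
            # test implementation: blow up at exactly one named point
            def __init__(self, target):
                self.target = target
            def crash_point(self, name):
                if name == self.target:
                    raise SystemExit("simulated crash at " + name)

        class Server:
            def __init__(self, store, faults=NoFaults()):
                self.store, self.faults = store, faults

            def handle(self, request):
                self.store.append(("accepted", request))     # persist before acknowledging
                self.faults.crash_point("after-persist")      # a test can kill the server here
                ack = "ack:" + request
                self.faults.crash_point("after-ack")
                self.store.append(("done", request))          # persisted completion
                return ack

        # In a test: build Server(store, CrashAt("after-ack")), call handle(), then restart
        # with the same store and assert that the accepted request still completes.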

    Read the article

  • Rails 3 with Ruby 1.9.1 on Heroku

    - by stephen murdoch
    I've decided that I am going to man up and start using Rails 3 from now on, but the following note found here puts me off a bit: Note that Ruby 1.8.7 has marshaling bugs that crash both Rails 2.3.x and Rails 3.0.0. Ruby 1.9.1 outright segfaults on Rails 3.0.0, so if you want to use Rails 3 with 1.9.x, jump on 1.9.2 trunk for smooth sailing. I use Heroku for my deployment and as far as I am aware they do not plan to add 1.9.2 to the stack until it's stable (which might be in August), so I was thinking of doing it with 1.9.1 and seeing what happens. I know that there is a 3rd beta release now but the comments on the blog imply that it's still a little bit buggy. Is DHH implying that you shouldn't touch Rails 3 at all if you are on 1.9.1? What are other Heroku-ists doing regarding Rails 3? Anyone using it for any production apps? I guess I'll only know once I've tried but any advice would be nice.

    Read the article

  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
    Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help! I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error [Microsoft][ODBC SQL Server Driver]String data, right truncation So, can anyone advise me on if this can be done, and how? Some example (not production) code that shows what I'm trying to do: SQLHENV hEnv = NULL; SQLRETURN iError = SQLAllocEnv(&hEnv); HDBC hDbc = NULL; SQLAllocConnect(hEnv, &hDbc); const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB"; UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH]; SWORD iConnectOutLen = 0; iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS, szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1), &iConnectOutLen, SQL_DRIVER_COMPLETE); HSTMT hStmt = NULL; iError = SQLAllocStmt(hDbc, &hStmt); const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)"; iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS); char* pszBigString = AllocBigString(8001); iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0, (SQLPOINTER)pszBigString, NULL); iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000 The table MyTestTable contains a single colum defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000 character limit to varchars, but not why is this happening in SQL 2005? Thanks, Andy

    Read the article
