Search Results

Search found 62606 results on 2505 pages for 'sql files'.

  • Between-request Garbage Collection using Passenger

    - by raphaelcm
    We're using Rails 3.0.7 and REE 1.8.7. Long-term, we will be upgrading, but at the moment it's not feasible. Following the advice of several blog posts, we've been tuning our GC, and have settings that work pretty well. But we would really like to run GC outside of the request-response cycle. I've tried patching Passenger per this post, and using the code supplied in this SO question. In both cases, GC does indeed happen between requests. However, every time the between-request GC happens, I see a bunch of this:

        MONGODB [INFO] Connecting...
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        Starting the New Relic Agent.
        Installed New Relic Browser Monitoring middleware
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` = 'pages' AND `refinery_settings`.`name` = 'use_marketable_urls' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 1 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \"false\"\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 1
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (4.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` IS NULL AND `refinery_settings`.`name` = 'user_image_sizes' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 17 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \n:small: 120x120>\n:medium: 280x280>\n:large: 580x580>\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 17
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        ******** Engine Extend: app/helpers/blog_posts_helper
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        ******** Engine Extend: app/models/user
        SQL (0.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        (etc, etc, etc)

    ...which is what happens when Rails "loads the world" at app startup. Basically, GC.start is re-loading the app for some reason. Because of this, between-request GC is much slower than inline GC. Is there a way around this? I would love to have snappy, between-request GC if possible. Thanks.

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city & state and geocodes that address to a lat/long location. Right now I am utilizing Google Maps' API, ColdFusion, and SQL Server. Basically the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are. This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
          <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
              geocoder.geocode({<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                  var marker = new google.maps.Marker({
                    map: map,
                    position: results[0].geometry.location,
                    title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                  });
                } else {
                  alert("Geocode was not successful for the following reason: " + status);
                }
              });
            }
          </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
          geocoder = new google.maps.Geocoder();
          var latlng = new google.maps.LatLng(42.4167, -90.4290);
          var myOptions = {
            zoom: 5,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          }
          var marker = new google.maps.Marker({
            position: latlng,
            map: map,
            title: "Test"
          });
          map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses lat/long that was hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.
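
    Not from the thread: on the final sub-question about persisting coordinates in SQL, a hedged sketch (the dbo.Locations table name is guessed from the GetLocations query; the hometown/state columns come from the code above; Omaha's coordinates are shown as sample values):

        -- Add columns to hold the geocoded result.
        ALTER TABLE dbo.Locations ADD latitude  DECIMAL(9, 6) NULL,
                                      longitude DECIMAL(9, 6) NULL;

        -- After the client-side geocoder resolves "Omaha, NE", store it:
        UPDATE dbo.Locations
        SET latitude = 41.2565, longitude = -95.9345
        WHERE hometown = 'Omaha' AND state = 'NE';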

  • Setting up SVN (Subversion) to manage our company's files: how to exclude large files from being versioned

    - by Roeland
    Two other guys and I recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include word documents, spreadsheets, client files, designs, and so on: anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
          Company A
            Design (photoshop files, wireframes, concepts)
            Documents (logins, quotes, proposals, etc.)
            Site Backups
          Company B
            Design
            Documents
            Site Backups
        Prospects
          Company C
          Company D
        Our Company
          Our Website
          Documents (contract, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go. I just like this system since it keeps versions of all my documents. Also, essentially I will have 3 backups in 3 different locations (3 local copies), so no need for backing it up. I am unsure of how SVN would perform as purely a huge file repository.

  • Podcast site - Serve audio files with CDN

    - by Bobe
    I am managing a small podcast website hosted on a shared server. Currently there are only eight or nine episodes, each of which is about 50 MB, so bandwidth is not really an issue at the moment. However, looking forward, would it be feasible to use a "free" CDN like Cloudflare to serve the audio files? If so, how would I set this up? I took a quick look at it before, and it seems you have to have your whole site routed (is that the right term?) through the CDN rather than just specific files or filetypes. I'd like some clarification on this.

  • Reading parameters and files in the browser, looking for how to execute on the server

    - by jbcolmenares
    I have a site done in Rails, which uses JavaScript to load files and generate forms for the user to input certain information. Those files and parameters are then to be used in a Fortran code on the server. When the UI ran on the server (using Qt), I would create a parameters file and execute the Fortran code using threads so I wouldn't block the computer. Now that it is web-based, I need to make the server and browser talk. What's the procedure for that? Where should I start looking? I'm already using Rails + JavaScript; I need that extra tool to do the talking, and have no idea where to start.

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within a tolerance?

    - by Glenn
    We have an SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs. But I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have one table, an "orders" table, with the following 6 fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but different quantities within 20 units, and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
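
    Not an answer from the thread, but a hedged sketch of the self-join shape the question is asking about, using the example table above. This covers criterion 2; for criterion 3, replace the quantity equality with ABS(a.quantity - b.quantity) <= 20:

        -- Pairs of orders from the same customer, for the same product and
        -- quantity, whose sale dates fall within five days of each other.
        SELECT a.order_id, b.order_id
        FROM orders AS a
        JOIN orders AS b
            ON  b.customer_id = a.customer_id
            AND b.product_id  = a.product_id
            AND b.quantity    = a.quantity
            AND b.order_id    > a.order_id   -- report each pair only once
            AND b.sale_date BETWEEN DATEADD(DAY, -5, a.sale_date)
                                AND DATEADD(DAY,  5, a.sale_date);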

  • SQL Server 2005 Database Tables - Row Comparison Column by Column

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation.

    Proposed idea: What I want to do is run both applications, which will each produce roughly 35,000 rows containing 10 columns. So, all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not.

    Current thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column, and write out any failed comparisons to a text log file.

    Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem.

    Extra detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
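
    Not part of the question, but a hedged T-SQL sketch of one set-based option that works on SQL Server 2005 (dbo.ResultsA and dbo.ResultsB are stand-in names for the two identically structured tables): EXCEPT compares entire rows, all columns at once, and treats NULLs as equal for its purposes.

        -- Rows produced by one application with no exact match from the other.
        SELECT 'only in A' AS side, *
        FROM (SELECT * FROM dbo.ResultsA
              EXCEPT
              SELECT * FROM dbo.ResultsB) AS d
        UNION ALL
        SELECT 'only in B', *
        FROM (SELECT * FROM dbo.ResultsB
              EXCEPT
              SELECT * FROM dbo.ResultsA) AS d;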

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd-party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
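
    Not from the question: the commonly cited client-side answer is System.Data.SqlClient.SqlBulkCopy with DestinationTableName pointed at the temp table. Staying on the T-SQL side, here is a hedged sketch of a table-valued parameter, which requires SQL Server 2008 or later (the question doesn't specify a version); the column list is abbreviated:

        -- One-time setup: a table type matching the DataTable's shape.
        CREATE TYPE dbo.StagingRow AS TABLE
        (
            col1 INT,
            col2 NVARCHAR(50)
            -- ...remaining columns
        );
        GO
        -- The session creates #table first; one call then moves every row,
        -- replacing 200-parameter row-by-row INSERTs with a single round trip.
        CREATE PROCEDURE dbo.LoadStaging
            @rows dbo.StagingRow READONLY
        AS
            INSERT INTO #table (col1, col2 /* ... */)
            SELECT col1, col2 /* ... */
            FROM @rows;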

  • Is there a library / tool to query MySQL data files (MyISAM / InnoDB) without the server? (the SQLite way)

    - by MGW
    Oftentimes I want to query my MySQL data directly without a server running, or without having access to the server (but having read/write rights to the files). Is there a tool, or maybe even a library, around to query MySQL data files the way it is possible with SQLite? I'm specifically looking for InnoDB and MyISAM support. Performance is not a factor. I don't have any knowledge about MySQL internals, but I presume it should be possible to do, and not too hard to pull the specific code out? Thank you for any suggestions!

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ...

  • All files erased after installing Ubuntu 11.04 alpha 3

    - by wifi
    Yeah, I know I should have backed up my files before proceeding; I completely forgot. Well, the thing is that I had a dual-boot system with Windows 7 and Ubuntu 10.10. Yesterday, I installed Ubuntu 11.04 alpha 3 (through a live USB). In the installation wizard I chose to have 11.04 install over 10.10, where I had no important files. However, it overwrote Windows and its data too. Is there some way to recover it? Thanks!

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients, I often encounter the query text in a trace file I've captured, but don't know the source of the query. I've found that many of the poorest-performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. The first thing I did was find a great utility called RSScripter, which opens up a Windows dialog...
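
    The article's full technique isn't shown in this excerpt. As a hedged, complementary T-SQL sketch: report definitions deployed to a report server can also be searched directly in the SSRS catalog database (ReportServer is the default database name; verify on your instance, and note the Content column is stored as image, hence the conversion chain):

        -- Reports (Type = 2) whose RDL contains a given query fragment.
        SELECT c.Path, c.Name
        FROM ReportServer.dbo.Catalog AS c
        WHERE c.Type = 2
          AND CONVERT(nvarchar(max),
                CONVERT(xml, CONVERT(varbinary(max), c.Content))
              ) LIKE N'%dbo.SlowView%';  -- hypothetical search fragment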

  • Avoid External Dependencies in SQL Server Triggers

    I sometimes want to perform auditing or other actions in a trigger based on some criteria. More specifically, there are a few cases that may warrant an e-mail; for example, if a web sale takes place that requires custom or overnight shipping and handling. It is tempting to just add code to the trigger that sends an e-mail when these criteria are met. But this can be problematic for two reasons: (1) your users are waiting for that processing to occur, and (2) if you can't send the e-mail, how do you decide whether or not to roll back the transaction, and how do you bring the problem to the attention of the administrator?
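
    A hedged sketch of the decoupling this excerpt argues for (all object and column names here are illustrative, not from the article): the trigger records only the fact that an e-mail is needed, and a scheduled job or Service Broker reader drains the queue off the user's critical path.

        CREATE TABLE dbo.EmailQueue
        (
            QueueID  INT IDENTITY(1,1) PRIMARY KEY,
            OrderID  INT NOT NULL,
            Reason   VARCHAR(50) NOT NULL,
            QueuedAt DATETIME NOT NULL DEFAULT GETDATE(),
            SentAt   DATETIME NULL
        );
        GO
        CREATE TRIGGER trg_WebSales_QueueEmail
        ON dbo.WebSales   -- hypothetical sales table
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- A cheap INSERT inside the transaction: no SMTP call, nothing to
            -- roll back if mail later fails, and the administrator can inspect
            -- rows where SentAt stays NULL.
            INSERT INTO dbo.EmailQueue (OrderID, Reason)
            SELECT i.OrderID, 'custom/overnight shipping'
            FROM inserted AS i
            WHERE i.ShippingType IN ('custom', 'overnight');  -- hypothetical column
        END;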

  • How do you bind SQL Data to a .NET DataGridView?

    - by Jordan S
    I am trying to bind a table in a SQL database to a DataGridView control. I would like to make it so that when the user enters a new line of data in the DataGridView, a record is automatically added to the database. Is there a way to do this using LINQ to SQL? I have tried using the code below, but after I add a new entry I don't think the data gets added to the DB. Please help!

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    I tried adding DB.SubmitChanges() to the CellValueChanged event handler, and that partially works. If I click the bottom empty row, it automatically fills in the ID (identity) column of the table with a "0" instead of the next unused value. If I change that value manually to the next available one, then it adds the new record fine, but if I leave it at 0 it does nothing. How can I fix this?
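
    Not from the thread, but the "0 instead of the next unused value" symptom is what you would see when the database isn't the one assigning the key. A hedged sketch of the table side (the Manufacturers name comes from the question's code; the other column is invented):

        -- With an IDENTITY key, the grid can leave the ID cell alone and the
        -- server assigns the next value when SubmitChanges() inserts the row.
        CREATE TABLE dbo.Manufacturers
        (
            ID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL  -- hypothetical column
        );

    On the LINQ to SQL side, the ID column's mapping would also need to be marked as database-generated (IsPrimaryKey = true, IsDbGenerated = true) so the client doesn't send a value on insert.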

  • Running SQL Server Jobs using a Proxy Account

    In most companies, roles and responsibilities are clearly defined for the various teams, whether it is the database team, application team or the development team. In some cases, the application team might own a number of jobs but they ...

  • Steps to rollback database changes without impacting SQL Server Log Shipping

    When pushing a major release to a large production database, you want to know that you'll be able to roll back changes if the need arises. These are some simple steps we can follow to ensure that we don't have to reconfigure log shipping all over again, thereby saving time and ensuring systems are not affected when rolling back changes.

  • SSIS object search using T-SQL

    Determining which objects are used across all your SSIS packages can be a challenging endeavor. James Greaves brings us a technique that can help you determine which packages might need to be changed based on objects you alter in your database.
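
    The article's own query isn't reproduced in this excerpt. As a hedged sketch of the general idea for packages stored in msdb: the package XML can be cast to text and searched for an object name (msdb.dbo.sysssispackages is the SQL Server 2008 store; on 2005 the equivalent table is msdb.dbo.sysdtspackages90):

        -- Packages whose definition mentions a table you're about to alter.
        SELECT p.name
        FROM msdb.dbo.sysssispackages AS p
        WHERE CONVERT(nvarchar(max),
                CAST(CAST(p.packagedata AS varbinary(max)) AS xml)
              ) LIKE N'%dbo.OrdersStaging%';  -- hypothetical table name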

  • Writing files to an Airport Extreme using AFP

    - by Bill Oldroyd
    Using Nautilus, I can connect from Ubuntu 12.04 (64-bit) to my Apple Airport Extreme with a user name and password without a problem. I can read, browse folders, and delete files. However, I cannot write files: the file is created, but its contents are not transferred. The transfer fails with the error message "kFPMiscErr", which I think means "authentication has already been established"(?). I have tried the command-line tools for AFP access, but these do not work either. Is there a solution to this problem?

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files?

    Edit: I can't assume that mysql will have permission to write to files.
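
    Not from the thread: one hedged, server-side sketch in plain SQL (no mysqldump involved). Note that the SELECT ... INTO OUTFILE half needs the FILE privilege, which the edit above says can't be assumed; in that case a client would run the same statements and write the results out itself. Database and table names are placeholders:

        -- Session 1: hold a global read lock so every per-table dump sees
        -- one consistent snapshot, then dump definition and data per table.
        FLUSH TABLES WITH READ LOCK;

        SHOW CREATE TABLE mydb.orders;     -- capture output as orders.schema.sql
        SELECT * FROM mydb.orders
        INTO OUTFILE '/tmp/orders.data.txt';

        -- ...repeat for each table, then release the lock:
        UNLOCK TABLES;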

  • SQL Saturday #169 - Denver

    Come join Steve Jones, Glenn Berry, and other Denver-area MVPs and speakers for a free day of training in Denver on Sept 22, 2012.

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database or in external files. It seems like in-database storage may be limited to 4 GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data: I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out of the DB. Are my misgivings about storing large blobs in the database valid? And if I'm going to store them out of the database, are there any frameworks/libraries to help with that? (I'm working in Python but am interested in technologies in other languages too.)
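
    On the seeking worry, and not from the thread: at the SQL level, a slice of a large text column can be requested without fetching the whole value, as in this hedged MySQL sketch (whether the engine avoids touching the full value internally depends on the storage engine; table and column names are invented):

        -- Hypothetical table holding one log per entity.
        CREATE TABLE entity_logs (
            entity_id BIGINT PRIMARY KEY,
            log_text  LONGTEXT NOT NULL
        );

        -- Fetch a 64 KB window starting at byte 1,000,001 for a paged viewer.
        SELECT SUBSTRING(log_text, 1000001, 65536) AS chunk
        FROM entity_logs
        WHERE entity_id = 42;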

  • Ubuntu One Files for Android will not let me log in

    - by user20867
    I installed Ubuntu One Files on my Nexus One phone. When I tap "Log in" on the main screen, the app tries to log in, then after a few seconds returns the following message: "Log-in failed, please try again later." I have an Ubuntu One account, and when I tap "Register" on the main screen for Ubuntu One Files, I can log in using my phone's web browser. But if I go back to the app and try to log in, I get the same error. Again, my phone is a Nexus One running Android 2.3.4. The phone is not rooted or modded in any way.
