Search Results

Search found 79317 results on 3173 pages for 'sql error messages'.

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which shows how you influence other people and who influences you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll also be accessing the data offline to provide accurate results. For Facebook activity, we need access to users' news feeds and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing the data and storing and retrieving it with the least overhead. I've summarized a few points below:

    1. Should we switch to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them?
    2. Should we use SQL Server Analysis Services?
    3. Is there any library other than Json.NET for parsing the data?
    4. Is it advisable to use a C# library instead of FQL + GET requests?

    I've tried to provide as much info as possible. Please share your views.
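
    As a point of reference for question 3, a minimal Json.NET sketch of pulling storable fields out of a Graph API feed page might look like the following (the feed shape follows Facebook's documented format; feedJson and the persisted fields are placeholders):

        using System;
        using System.Linq;
        using Newtonsoft.Json.Linq;

        static void ParseFeed(string feedJson)   // feedJson: raw Graph API response text
        {
            JObject feed = JObject.Parse(feedJson);
            foreach (JToken post in feed["data"])
            {
                string postId = (string)post["id"];
                string author = (string)post["from"]["name"];
                int likeCount = post["likes"] == null
                    ? 0
                    : post["likes"]["data"].Children().Count();
                // persist postId, author, likeCount for the influence analysis
                Console.WriteLine("{0} by {1}: {2} likes", postId, author, likeCount);
            }
        }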

    Read the article

  • SQL Server Integration Services Connection Manager Tips and Tricks

    In this article, we will take a look at the following Tips and Tricks for Connection Managers: adding an "Application Name" property to the connection string; creating two Connection Managers for each database connection; and capturing Connection Manager details in Package Configurations.
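
    For a sense of the first tip, it amounts to appending one extra key/value pair to the connection string (server, database, and package names below are placeholders), which makes the package's connections easy to identify in sp_who2 or a Profiler trace:

        // Example only - names are placeholders:
        string connStr = "Data Source=myServer;Initial Catalog=myDb;" +
                         "Integrated Security=SSPI;Application Name=MyEtlPackage;";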

    Read the article

  • How to receive HTTP messages using Socket

    - by Poma
    I'm using the Socket class for my web client. I can't use HttpWebRequest since it doesn't support SOCKS proxies. So I have to parse headers and handle chunked encoding myself. The most difficult thing is determining the length of the content, so I have to read it byte by byte. First I have to use ReadByte() to find the last header (the "\r\n\r\n" combination), then read each chunk's size, etc. But this approach has very poor performance. Can you suggest a better solution? Maybe some open source examples or libraries that handle HTTP requests through sockets (not very big and complicated though, I'm a noob).
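
    One common way to avoid per-byte reads is to read in chunks and scan a buffer for the header terminator. A minimal sketch (stream setup, proxy handshake, and chunked-body decoding omitted):

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        // Reads from the stream in 8 KB chunks until "\r\n\r\n" is seen, then
        // returns the header text; any body bytes read past the headers are
        // handed back in 'leftover' so the caller can keep them.
        static string ReadHeaders(NetworkStream stream, out MemoryStream leftover)
        {
            var chunk = new byte[8192];
            var received = new MemoryStream();
            int end = -1;

            while (end < 0)
            {
                int n = stream.Read(chunk, 0, chunk.Length);
                if (n == 0) throw new IOException("Connection closed before headers ended.");
                received.Write(chunk, 0, n);
                end = FindDoubleCrlf(received.ToArray());
            }

            byte[] all = received.ToArray();
            leftover = new MemoryStream(all, end + 4, all.Length - (end + 4));
            return Encoding.ASCII.GetString(all, 0, end);
        }

        static int FindDoubleCrlf(byte[] data)
        {
            for (int i = 0; i + 3 < data.Length; i++)
                if (data[i] == '\r' && data[i + 1] == '\n' &&
                    data[i + 2] == '\r' && data[i + 3] == '\n')
                    return i;
            return -1;
        }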

    Read the article

  • Oneiric updates cause "Window Creation Error" in Second Life client

    - by Yuttadhammo
    With the latest Oneiric updates, I've been unable to use Second Life, a virtual reality simulator, in Ubuntu. I assume it has something to do with the NVIDIA driver, since the error in the console says:

        WARNING: createWindow: LLWindowManager::create() : Error creating window.
        WARNING: LLViewerWindow: Failed to create window, to be shutting Down, be sure your graphics driver is updated

    I guess I could file a bug on Launchpad, but I'm not sure where to file it. Does anyone have any insight into what may be causing this problem? It was working fine a week ago, and older viewers (Imprudence, anyway) still work fine.

    Read the article

  • C error expected specifier-qualifier-list before ‘time_t’

    - by ambika
    I got this error, triggered from error.c line 31:

        /usr/include/ap/mas.h:254: error: expected specifier-qualifier-list before ‘time_t’
        make: *** [error.o] Error 1

    A commenter (John Bode) gave this feedback: "We at least need to see line 31 of error.c and line 254 of mas.h, preferably with a few lines of context around each. This error may have nothing to do with how time_t is being declared."

    So I checked line 31 of error.c, which is:

        #include "mas.h"

    And around line 254 of mas.h:

        #include <sys/types.h>
        typedef struct _x {
            time_t time;
        } x;

    Can anybody suggest where I am going wrong?

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a separate database for each customer's data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products, each in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Missing error handling in Streaming-AJAX-Proxy Log

    - by Michael Freidgeim
    We are using AjaxProxy (from http://www.codeproject.com/KB/ajax/ajaxproxy.aspx) on our web site, but started to notice errors accessing the log.txt file. I found that the file is created by the Log class, which has no way to switch logging off and no error handling. I've added reading the file name from configuration, plus a try/catch block:

        public static class Log
        {
            private static StreamWriter logStream;
            private static object lockObject = new object();

            public static void WriteLine(string msg)
            {
                // Logging is now opt-in: no configured file name means no logging.
                string logFileName = ConfigurationExtensions.GetAppSetting("AjaxStreamingProxy.LogFile", "");
                if (logFileName.IsNullOrEmpty())
                    return;

                try
                {
                    if (logStream == null)
                    {
                        lock (lockObject)
                        {
                            if (logStream == null)
                            {
                                logStream = File.AppendText(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, logFileName));
                            }
                        }
                    }
                    logStream.WriteLine(msg);
                }
                catch (Exception exc)
                {
                    // Swallow logging failures so the proxied request itself still succeeds.
                    string ignoredMsg = String.Format("The error occurred while logging {0}, but processing will continue.\n{1}", msg, exc);
                    LoggerHelper.LogEvent(ignoredMsg, MyCategorySource, TraceEventType.Warning, true);
                }
            }
        }

    Read the article

  • Between-request Garbage Collection using Passenger

    - by raphaelcm
    We're using Rails 3.0.7 and REE 1.8.7. Long-term, we will be upgrading, but at the moment it's not feasible. Following the advice of several blog posts, we've been tuning our GC, and have settings that work pretty well. But we would really like to run GC outside of the request-response cycle. I've tried patching Passenger per this post, and using the code supplied in this SO question. In both cases, GC does indeed happen between requests. However, every time the between-request GC happens, I see a bunch of this:

        MONGODB [INFO] Connecting...
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        Starting the New Relic Agent.
        Installed New Relic Browser Monitoring middleware
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` = 'pages' AND `refinery_settings`.`name` = 'use_marketable_urls' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 1 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \"false\"\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 1
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (4.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` IS NULL AND `refinery_settings`.`name` = 'user_image_sizes' LIMIT 1
        SQL (0.0ms) BEGIN
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 17 LIMIT 1
        AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \n:small: 120x120>\n:medium: 280x280>\n:large: 580x580>\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 17
        SQL (0.0ms) SHOW TABLES
        RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms) COMMIT
        ******** Engine Extend: app/helpers/blog_posts_helper
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        ******** Engine Extend: app/models/user
        SQL (0.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) describe `roles_users`
        SQL (0.0ms) SHOW TABLES
        SQL (4.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        SQL (0.0ms) SHOW TABLES
        (etc, etc, etc)

    This is what happens when Rails "loads the world" at app startup. Basically, GC.start is re-loading the app for some reason. Because of this, between-request GC is much slower than inline GC. Is there a way around this? I would love to have snappy between-request GC if possible. Thanks.

    Read the article

  • "MOZILLA_FIVE_HOME not set" zekr quran study software java error

    - by Acess Denied
    I installed the Zekr Quran study software on Ubuntu 12.04 and then upgraded to 12.10. Since then, Zekr gives me this error whenever I start it:

        org.eclipse.swt.SWTError: No more handles [Unknown Mozilla path (MOZILLA_FIVE_HOME not set)]
            at org.eclipse.swt.SWT.error(SWT.java:4387)
            at org.eclipse.swt.browser.Mozilla.initMozilla(Mozilla.java:1939)
            at org.eclipse.swt.browser.Mozilla.create(Mozilla.java:699)
            at org.eclipse.swt.browser.Browser.<init>(Browser.java:99)
            at net.sf.zekr.ui.QuranForm.makeFrame(QuranForm.java:628)
            at net.sf.zekr.ui.QuranForm.init(QuranForm.java:340)
            at net.sf.zekr.ui.QuranForm.<init>(QuranForm.java:319)
            at net.sf.zekr.ZekrMain.startZekr(ZekrMain.java:51)
            at net.sf.zekr.ZekrMain.main(ZekrMain.java:94)

    Please advise me.

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city and state and geocodes that address to a lat/long location. Right now I am utilizing the Google Maps API, ColdFusion, and SQL Server. Basically, the city and state fields are in a database table, and I want to take those locations and get a marker put on a Google Map showing where they are. This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page:

        function codeAddress() {
            <cfloop query="GetLocations">
            var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value;
            if (geocoder) {
                geocoder.geocode( {<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        var marker = new google.maps.Marker({
                            map: map,
                            position: results[0].geometry.location,
                            title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>
                        });
                    } else {
                        alert("Geocode was not successful for the following reason: " + status);
                    }
                });
            }
            </cfloop>
        }

    And here is the code to initialize the map:

        var geocoder;
        var map;
        function initialize() {
            geocoder = new google.maps.Geocoder();
            var latlng = new google.maps.LatLng(42.4167, -90.4290);
            var myOptions = {
                zoom: 5,
                center: latlng,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            }
            var marker = new google.maps.Marker({
                position: latlng,
                map: map,
                title: "Test"
            });
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }

    I do have a map working that uses lat/long values hard-coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.

    Read the article

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within…

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is with a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs. But I have to admit I'm having a lot of trouble with this one. For a concrete example, suppose we have one "orders" table with the following six fields: order_id, customer_id, product_id, quantity, sale_date, price. We want to look through the records to find suspect duplicates on the following example criteria, which get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but different quantities within 20 units, and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
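
    To show the shape of a set-based alternative, here is a minimal sketch of criterion 1 as a self-join issued from C#. The table and column names come from the question; the connection string and the "suspect" flag column are assumptions:

        using System.Data.SqlClient;

        static void FlagCriterion1(string connectionString)
        {
            // One set-based statement replaces the per-row cursor work.
            const string sql = @"
                UPDATE o
                SET    o.suspect = 1
                FROM   orders AS o
                JOIN   orders AS dup
                  ON   dup.product_id   = o.product_id
                 AND   dup.sale_date    = o.sale_date
                 AND   dup.quantity     = o.quantity
                 AND   dup.customer_id <> o.customer_id;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    Criteria 2 and 3 follow the same join shape, with the equality tests swapped for range checks such as DATEDIFF(day, dup.sale_date, o.sale_date) and ABS(dup.quantity - o.quantity).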

    Read the article

  • How to get nicer error-messages in this bash-script?

    - by moata_u
    I'm trying to catch any error when running a command, in order to write a log file / report. I've tried this code:

        function valid() {
            if [ $? -eq 0 ]; then
                echo "$var1" ": status : OK"
            else
                echo "$var1" ": status : ERROR"
            fi
        }

        function save() {
            sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" $search
            var1="adding database ip"
            valid $var1

            sed -i "/connection.username/c connection.username=$name" #$search
            var1="addning database SID"
            valid $var1
        }

        save

    The output looks like this:

        adding database ip : status : OK
        sed: no input file

    But I want it to look like this:

        adding database ip : status : OK
        sed: no input file : status : ERROR

    or this:

        adding database ip : status : OK
        addning database SID : status : ERROR

    I've been trying, but it's not working for me. :(

    Read the article

  • Can't copy file (I/O error) Ubuntu Server

    - by QxQ
    I'm running Ubuntu Server 12.04, and found out that a couple of files aren't behaving like they should. The file name is r.-1.-1.mca, and I've tried this terminal command:

        sudo cp r.-1.-1.mca ../

    The hard drive works/thinks for a while, then spits this back out:

        cp: reading `r.-1.-1.mca': Input/output error
        cp: failed to extend `../r.-1.-1.mca': Input/output error

    I have another file behaving like this. My guess is that these files are corrupt, so I tried running the server in recovery mode and it told me to check the file system. However, it didn't come up with any errors. Is there a way to repair these files so I can use them normally again?

    Read the article

  • Unfounded Secure/Unsecure Messages

    - by Marty Trenouth
    I'm having significant difficulty locating the root cause of a secure/insecure content message coming from IE. I've looked through the entire output and there are NO references to http:. I've searched for unsourced iframes, which can cause this message, and there are none; other than jQuery 1.4, the text "iframe" doesn't even appear in the source. I'm almost at my wits' end trying to find the cause of this. Does anyone have any ideas?

    Read the article

  • SQL Server 2005 Database Tables - Row Comparison Column by Column

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation.

    Proposed idea: What I want to do is run both applications, each of which will produce roughly 35,000 rows containing 10 columns - so all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not.

    Current thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column, and write out any failed comparisons to a text log file.

    Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70,000 rows' worth of data? I'm struggling for ideas on how to tackle this problem.

    Extra detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
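
    For scale, 70,000 rows is small enough to compare in memory. A minimal C# sketch of the column-by-column pass, assuming both tables are already loaded into DataTables and share a key column (here called "id" - an assumed name):

        using System.Collections.Generic;
        using System.Data;
        using System.IO;

        static void CompareTables(DataTable tableA, DataTable tableB, string keyColumn)
        {
            // Index table B by key so each row of A is matched in O(1).
            var keyed = new Dictionary<object, DataRow>();
            foreach (DataRow r in tableB.Rows)
                keyed[r[keyColumn]] = r;

            using (var log = new StreamWriter("mismatches.log"))
            {
                foreach (DataRow a in tableA.Rows)
                {
                    DataRow b;
                    if (!keyed.TryGetValue(a[keyColumn], out b))
                    {
                        log.WriteLine("Key {0}: missing from second table", a[keyColumn]);
                        continue;
                    }
                    foreach (DataColumn col in tableA.Columns)
                        if (!object.Equals(a[col.ColumnName], b[col.ColumnName]))
                            log.WriteLine("Key {0}, column {1}: '{2}' vs '{3}'",
                                a[keyColumn], col.ColumnName, a[col.ColumnName], b[col.ColumnName]);
                }
            }
        }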

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd-party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
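
    SqlBulkCopy is the usual answer here, and it works against a temp table as long as the table is created on the same connection. A sketch (the column definitions are placeholders, and conn is assumed to be open):

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(SqlConnection conn, DataTable data)
        {
            // #staging only exists for this session, which also provides the
            // per-process isolation the question asks about.
            using (var create = new SqlCommand(
                "CREATE TABLE #staging (col1 INT, col2 NVARCHAR(50) /* ... col200 */)", conn))
            {
                create.ExecuteNonQuery();
            }

            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "#staging";
                bulk.BatchSize = 5000;          // tune; one round trip per batch
                bulk.WriteToServer(data);       // streams the DataTable as a bulk load
            }
        }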

    Read the article

  • TB3.0.10: Messages being sent back to me

    - by punkinette64
    Hello, I don't know if this will make any sense to you, but here goes. When I send an email and I get a response, more often than not my email address is in the 'To:' field. However, my email address also ends up in the 'From' column. The 'reply-from' address is nowhere to be found, so I don't have any address I can send my reply to, as both fields contain just my own email address. What am I doing wrong here? Is there some setting in my Tools > Options set up incorrectly? The set-up in Options is pretty difficult to understand, and it doesn't offer any 'To' or 'reply-from' choices. This is critical, because I just cannot answer my emails when there is no one I can send a reply to. Please, please try to help me. If you have the answer, please email me at: [email protected] Thank you. Blessings. Shiloh

    Read the article

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ... [Read Full Article]

    Read the article

  • How to teach your users/customers to send better error descriptions

    - by ckeller
    I often have to deal with customers or users who report errors in applications. Most of the time their report is something useless like "ERROR!!! x does not work", without much more information. To resolve the issue I have to request every single detail from them, which is often more time-consuming than fixing the issue itself. Others send information in formats which are not ideal, like screenshots (of data records, not of errors), even though they could send a link (we have access to the systems), and so on. How do you tell your users/customers to describe their problems in more detail, so that the whole process is easier for both sides? Edit: This question is more about the social skills than about programmatic collection of logs and error information. I'm aware that the latter should be part of good software design.

    Read the article

  • Design pattern to separate messages from the actual process

    - by Manish Gupta
    I have a C# application that syncs data between a PC and Palm devices. There is code written like this:

        showMessage("synchronizing Table1");
        Sync(destTable1, sourceTable1);
        Sync(destTable2, sourceTable2);
        showMessage("synchronizing Table2");
        // more code

    How do I separate the actual process of synchronizing from displaying messages? Which design pattern should I follow? Thanks in advance...
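
    This is usually handled with the Observer pattern: the sync engine raises an event and the UI subscribes to it, so the engine never knows how (or whether) messages are shown. A minimal sketch, with string placeholders standing in for the real table objects:

        using System;

        public class SyncEngine
        {
            // The engine only announces progress; it has no UI dependency.
            public event Action<string> Progress;

            public void SyncAll()
            {
                Report("synchronizing Table1");
                Sync("destTable1", "sourceTable1");   // placeholders for the real tables
                Report("synchronizing Table2");
                Sync("destTable2", "sourceTable2");
            }

            void Sync(string dest, string source)
            {
                // existing sync logic goes here, untouched
            }

            void Report(string msg)
            {
                Action<string> handler = Progress;
                if (handler != null) handler(msg);    // no subscriber, no message
            }
        }

        // Wiring in the UI layer:
        //   var engine = new SyncEngine();
        //   engine.Progress += showMessage;
        //   engine.SyncAll();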

    Read the article

  • Avoid External Dependencies in SQL Server Triggers

    I sometimes want to perform auditing or other actions in a trigger based on some criteria. More specifically, there are a few cases that may warrant an e-mail; for example, if a web sale takes place that requires custom or overnight shipping and handling. It is tempting to just add code to the trigger that sends an e-mail when these criteria are met. But this can be problematic for two reasons: (1) your users are waiting for that processing to occur, and (2) if you can't send the e-mail, how do you decide whether or not to roll back the transaction, and how do you bring the problem to the attention of the administrator?
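
    One decoupling that addresses both concerns: the trigger only INSERTs a row into a queue table (cheap and transactional), and a separate process sends the mail later. A sketch of that sending process in C#; the EmailQueue table and all names here are illustrative, not from the article:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Net.Mail;

        static void DrainEmailQueue(string connStr)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                var pending = new DataTable();
                new SqlDataAdapter(
                    "SELECT QueueId, Recipient, Subject, Body FROM dbo.EmailQueue WHERE SentAt IS NULL",
                    conn).Fill(pending);

                var smtp = new SmtpClient("mailhost");
                foreach (DataRow row in pending.Rows)
                {
                    smtp.Send("noreply@example.com", (string)row["Recipient"],
                              (string)row["Subject"], (string)row["Body"]);

                    // Mark as sent only after the send succeeds; failures stay
                    // queued for the next run instead of rolling back a sale.
                    var upd = new SqlCommand(
                        "UPDATE dbo.EmailQueue SET SentAt = GETDATE() WHERE QueueId = @id", conn);
                    upd.Parameters.AddWithValue("@id", row["QueueId"]);
                    upd.ExecuteNonQuery();
                }
            }
        }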

    Read the article

  • How do you bind SQL Data to a .NET DataGridView?

    - by Jordan S
    I am trying to bind a table in a SQL database to a DataGridView control. I would like to make it so that when the user enters a new line of data in the DataGridView, a record is automatically added to the database. Is there a way to do this using LINQ to SQL? I have tried the code below, but after I add a new entry I don't think the data gets added to the DB. Please help!

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers
                   select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    I tried adding DB.SubmitChanges() to the CellValueChanged event handler, and that partially works. If I click the bottom empty row, it automatically fills in the ID (identity) column of the table with a "0" instead of the next unused value. If I change that value manually to the next available one, then it adds the new record fine, but if I leave it at 0 it does nothing. How can I fix this?
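
    One pattern that may help (a sketch, not a drop-in fix): bind a BindingList, push added rows into the DataContext with InsertOnSubmit, and let SQL Server assign the identity - which requires the ID column to be marked IsDbGenerated in the LINQ to SQL designer and left read-only in the grid:

        using System.ComponentModel;
        using System.Linq;
        using System.Windows.Forms;

        var db = new BOMClassesDataContext();
        var list = new BindingList<Manufacturer>(db.Manufacturers.ToList());

        list.ListChanged += delegate(object s, ListChangedEventArgs e)
        {
            // Queue newly added grid rows for insertion; the identity value
            // is filled in by the database when SubmitChanges runs.
            if (e.ListChangedType == ListChangedType.ItemAdded)
                db.Manufacturers.InsertOnSubmit(list[e.NewIndex]);
        };

        var bs = new BindingSource();
        bs.DataSource = list;
        dataGridView1.DataSource = bs;

        // Later - e.g. in RowValidated rather than CellValueChanged:
        // db.SubmitChanges();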

    Read the article
