Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.


  • Windows Azure: Creating subdirectories inside a blob

    - by veda
    I wanted to create some subdirectories inside my blob container, but it is not working out well. Here is my code:

        protected void ButUpload_click(object sender, EventArgs e)
        {
            // store uploaded file as a blob
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;
                // get reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");
                if (textbox.Text != "")
                {
                    name = textbox.Text + "/" + name;
                }
                // set the name for the uploaded file
                string UploadDocName = name;
                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Metadata["FILETYPE"] = "text";
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;
                // upload the blob to storage
                blob.UploadFromStream(uplFileUpload.FileContent);
            }
        }

    The idea is that if I have to create a subdirectory, I enter its name in the textbox. For example, if I need to create a file named "test.txt" inside the subdirectory "files", then textbox.Text = "files" and uplFileUpload.FileName = "test.txt". I concatenate them and upload to the blob, but it is not working well: I end up with just https://test.core.windows.net/documents/files/ when I was expecting https://test.core.windows.net/documents/files/test.txt. What am I doing wrong? How do I create subdirectories inside a blob container?
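
    For context: blob storage has no true directories; a slash in the blob name simply creates a virtual directory, so a reference named "files/test.txt" should yield .../documents/files/test.txt. A minimal sketch of that pattern, assuming an existing container reference (the stream and names are illustrative):

        // sketch: upload to a virtual directory by putting "/" in the blob name
        CloudBlobContainer container = cloudBlobClient.GetContainerReference("documents");
        CloudBlockBlob blob = container.GetBlockBlobReference("files/test.txt");
        using (var stream = System.IO.File.OpenRead("test.txt"))   // any readable stream
        {
            blob.UploadFromStream(stream);   // final URL: .../documents/files/test.txt
        }
        // if only ".../documents/files/" appears, check that the file-name part
        // is non-empty and that the combined name has no trailing slash

    (In the question's code, that means verifying uplFileUpload.FileName actually carries a value at upload time.)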


  • DuplicateKeyException in LINQ, but I've set auto increment and auto sync

    - by Fritos
    I'm getting a DuplicateKeyException error in my C# code. I've set Auto Generated = true and Auto-Sync = OnInsert in my DBML, and I'm not even touching the PK field in any manually written code, as seen below (my primary key field is actually called PK):

        using (DeviceExerciseDataDataContext context = new DeviceExerciseDataDataContext())
        {
            foreach (Data tgudData in data.Data)
            {
                tgd = new tableData();
                tgd.FK = key;
                tgd.Time = tgudData.TimeStamp;
                tgd.Calories = Convert.ToInt32(tgudData.Calories);
                tgd.HeartRate = tgudData.AvgHr;
                tgd.BenchAngle = tgudData.Angle;
                tgd.WorkoutTarget = 0;
                tgd.Reps = tgudData.Reps;
                context.tableDatas.InsertOnSubmit(tgd);
            }
            context.SubmitChanges();
        }

    This is the designer code for the columns (the columns are named PK and FK):

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_PK", AutoSync=AutoSync.OnInsert,
            DbType="Int NOT NULL", IsPrimaryKey=true, IsDbGenerated=true)]
        public int PK
        {
            get { return this._PK; }
            set
            {
                if ((this._PK != value))
                {
                    this.OnPKChanging(value);
                    this.SendPropertyChanging();
                    this._PK = value;
                    this.SendPropertyChanged("PK");
                    this.OnPKChanged();
                }
            }
        }

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_FK", DbType="Int")]
        public System.Nullable<int> FK
        {
            get { return this._FK; }
            set
            {
                if ((this._FK != value))
                {
                    this.OnFKChanging(value);
                    this.SendPropertyChanging();
                    this._FK = value;
                    this.SendPropertyChanged("FK");
                    this.OnFKChanged();
                }
            }
        }
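
    One hedged observation: IsDbGenerated=true only behaves if the underlying column really is an IDENTITY column; if the DBML says auto-generated but the database column is a plain INT, every insert goes out with PK = 0 and the second row triggers a duplicate-key failure. A quick illustrative check from code (table and column names assumed from the question):

        // sketch: ask SQL Server whether PK is actually IDENTITY (needs System.Linq)
        using (var context = new DeviceExerciseDataDataContext())
        {
            int? isIdentity = context.ExecuteQuery<int?>(
                "SELECT COLUMNPROPERTY(OBJECT_ID('tableData'), 'PK', 'IsIdentity')").Single();
            // 1 means IDENTITY; 0 (or null) means the DBML and the schema disagree,
            // which would explain the DuplicateKeyException
        }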


  • Kohana session data does not persist across pages in Chrome and IE browsers

    - by user1062637
    Kohana session data does not persist across pages opened in Chrome and IE browsers; the same works fine in Firefox. The Kohana version used is 2.3. The session config file holds:

        $config['driver'] = 'native';

        /**
         * Session storage parameter, used by drivers.
         */
        $config['storage'] = '';

        /**
         * Session name.
         * It must contain only alphanumeric characters and underscores. At least one letter must be present.
         */
        $config['name'] = 'NITWSESSID';

        /**
         * Session parameters to validate: user_agent, ip_address, expiration.
         */
        $config['validate'] = array();

        /**
         * Enable or disable session encryption.
         * Note: this has no effect on the native session driver.
         * Note: the cookie driver always encrypts session data. Set to TRUE for stronger encryption.
         */
        $config['encryption'] = FALSE;

        /**
         * Session lifetime. Number of seconds that each session will last.
         * A value of 0 will keep the session active until the browser is closed (with a limit of 24h).
         */
        $config['expiration'] = 2700;

        /**
         * Number of page loads before the session id is regenerated.
         * A value of 0 will disable automatic session id regeneration.
         */
        $config['regenerate'] = 0;

        /**
         * Percentage probability that the gc (garbage collection) routine is started.
         */
        $config['gc_probability'] = 2;

    Help needed urgently.


  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures that I need to store in Java. The naive don't-worry-about-memory approach says do this:

        public class Record {
            final private int field1;
            final private int field2;
            final private long field3;
            /* constructor & accessors here */
        }

        List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs:

        public class OptimizedRecordStore {
            final private int[] field1;
            final private int[] field2;
            final private long[] field3;

            Record getRecord(int i) {
                return new Record(field1[i], field2[i], field3[i]);
            }
            /* constructor and other accessors & methods */
        }

    Edit: assume the number of records is something that is changed infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously, if I add or change the number of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind: in other situations similar to this, I need column access on the records - e.g. if field1 and field2 are "time" and "position", it's important for me to get those values as an array for use with MATLAB, so I can graph and analyze them efficiently.
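
    A rough back-of-the-envelope, assuming a 64-bit HotSpot JVM with compressed oops: each Record costs about 12-16 bytes of object header plus 4 + 4 + 8 bytes of fields, padded to 8-byte alignment (roughly 32 bytes), plus a 4-byte reference slot in the ArrayList - so 10^6 records come to roughly 36 MB. The three-array version is about 2 x 4 MB of int data plus 8 MB of long data (about 16 MB total) with only three object headers altogether. Header and alignment details vary by JVM and flags, so treat these as estimates rather than exact figures.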


  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" - deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." - Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" - deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
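
    For concreteness, the attribute-value idea mentioned above usually comes down to one narrow table, something like (csv_id, row_no, column_name, value), with the always-present mobile column promoted to a real, indexed column on the row table. That is a sketch of one option rather than a recommendation: it keeps the schema stable at the cost of less convenient queries.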


  • How to check whether a user is logged in to a web application?

    - by Morgan Cheng
    I want to learn the whole details of web application authentication, so I decided to write a CodeIgniter authentication library from scratch. Now I have to make a design decision about how to determine whether a user is logged in.

    Basically, after the user inputs a username & password pair, a cookie is set for the session, and subsequent navigation in the web application does not require the username & password again. The server side checks whether the session cookie is valid to determine whether the current user is logged in. The question is: how do I determine whether the cookie is a valid cookie issued by the server side?

    The simplest way I can imagine is to store the cookie value in the session state as well, and for each HTTP request compare the value from the cookie with the value from the server session. (Since the CodeIgniter session library stores session variables in cookies, that is not applicable without some tweaking.) This method requires storage on the server side. For a huge web application deployed across multiple datacenters, it is possible that a user inputs the username & password while browsing in one datacenter, then later accesses the web application through another datacenter. The expected behavior is that the user inputs the username & password only once. As a result, all datacenters would need access to the session state - which may not be practical even if the session state is stored in external storage such as a database.

    I tried Google: I logged in to Google through an Asian proxy, which presumably directed me to datacenters in Asia, then switched to a North American proxy, which should direct me to datacenters in North America. It recognized my login without asking for the username and password again.

    So, is there any way to determine whether a user is logged in without server-side session state?
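
    One stateless technique this question is reaching for is a signed cookie: the server stores nothing per session, and validity is checked by recomputing an HMAC over the cookie's payload with a secret shared by all datacenters. A minimal sketch, assuming such a shared secret exists (names and payload format are illustrative, not CodeIgniter's API):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static string Sign(string payload, byte[] secret)
        {
            using (var hmac = new HMACSHA256(secret))
            {
                byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                return payload + "." + Convert.ToBase64String(mac);
            }
        }

        static bool IsLoggedIn(string cookie, byte[] secret)
        {
            int dot = cookie.LastIndexOf('.');
            if (dot < 0) return false;
            string payload = cookie.Substring(0, dot);   // e.g. "alice|2010-06-01T12:00Z"
            return Sign(payload, secret) == cookie;      // recompute and compare
            // a real implementation would also check the expiry inside the payload
            // and use a constant-time comparison to avoid timing attacks
        }

    Any datacenter holding the secret can validate the cookie without a shared session store, which would match the observed Google behavior.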


  • SQL Server slow in a production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure except that SQL Server runs on a virtual server, and the data, log, and filestream files are on another storage server (data and filestream together, log on a separate server). Now, there's this query that, when we run it, gives these durations (first we clear the cache): 300 ms, 20 ms, 15 ms, 17 ms, ... The first time takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try this tomorrow): 2500 ms, 2600 ms, 2400 ms, ... The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause - not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have some info for me on how to go on with this issue.


  • Need some clarification on the ANSI/SPARC 3-tier database architecture.

    - by Moonshield
    Hi there, I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about and was wondering if someone could offer some assistance.

        "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them."

    As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means that the physical data storage can be altered without the end user being affected, and likewise the external level shields users from changes to the conceptual schema. The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema handled by some sort of DDL interpreter, and the external schema handled by a DML interpreter (or have I misunderstood what each level does)? Any assistance would be greatly appreciated. Thanks, Moonshield


  • Java ArrayList remove dupes without sets

    - by Kieran
    I'm having problems removing duplicates from an ArrayList. It's for a college assignment. Here's the code I have already:

        public int numberOfDiffWords() {
            ArrayList<String> list = new ArrayList<>();
            for (int i = 0; i < words.size() - 1; i++) {
                for (int j = i + 1; j < words.size(); j++) {
                    if (words.get(i).equals(words.get(j))) {
                        // do nothing
                    } else {
                        list.add(words.get(i));
                    }
                }
            }
            return list.size();
        }

    The problem is in the numberOfDiffWords() method. The populate-list method is working correctly, as my instructor has given me a sample string (containing 4465 words) to analyse - printing words.size() gives the correct result. I want to return the size of the new ArrayList with all duplicates removed. words is an ArrayList class attribute. UPDATE: I should have mentioned I'm only allowed to use dynamic index-based storage for this part of the assignment, which means no hash-based storage.
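
    For contrast, a hedged sketch of the index-only idea: copy, sort, then count positions whose value differs from their predecessor, so duplicates (which become adjacent after sorting) are counted once. The assignment is in Java; the logic is shown here in C# and ports line for line:

        // sketch: count distinct strings using only index-based storage
        static int NumberOfDiffWords(List<string> words)
        {
            var copy = new List<string>(words);
            copy.Sort();                              // duplicates become adjacent
            int distinct = 0;
            for (int i = 0; i < copy.Count; i++)
            {
                if (i == 0 || copy[i] != copy[i - 1]) // first element of each run
                    distinct++;
            }
            return distinct;
        }

    The original nested loop, by comparison, adds words.get(i) once for every later element it does not match, which inflates the count.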


  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.

    One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on; I validate and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated - most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a noticeable delay while the file travels from my server to S3. This also introduces seemingly unnecessary bandwidth.

    The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I would have to run some cleanup code in S3 if the validation fails.

    Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation, with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
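
    Worth noting, as a hedged pointer rather than a full recipe: S3 natively supports browser-based uploads via a signed POST policy. The server pre-signs a policy document constraining the key prefix, content type, and size; the browser posts the file straight to S3 under those constraints; and the app then only needs the follow-up request that creates the database record. That keeps the upload off the app server while still letting the server enforce part of the validation up front through the policy itself.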


  • Windows Phone 7, login screen redirect and a case for .exit?

    - by Jarrette
    I know this has been discussed ad nauseam, but I want to present my case:

    1. My start page in my app is login.xaml. The user logs in, the username and password are authenticated through my WCF service, the username is saved in isolated storage, and then the user is redirected to mainpage.xaml.

    2. When a user starts my app and they already have a saved username in isolated storage, they are redirected to mainpage.xaml.

    3. If the user hits the "back" hard button from mainpage.xaml, they are redirected to the login screen, which in turn redirects them back to mainpage.xaml since they already have a saved local username.

    This is currently causing my app to fail certification, since the user cannot hit the "back" button to exit the app from mainpage.xaml. My instinct here is to override the BackKeyPress in mainpage.xaml and exit the app somehow; by reading the other posts, I can see that this method is not available. My second idea was to somehow store a property in the app.xaml.cs page that would tell the app to exit when the login page is loaded and that property is set to true, but that seems a bit hacky as well. Any ideas here?
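
    One pattern that sidesteps the loop (available from Windows Phone 7.1, where NavigationService.RemoveBackEntry exists): let the login page redirect as it does now, then have mainpage.xaml drop the login page from the back stack, so the hardware Back button exits the app naturally. An illustrative sketch in mainpage.xaml.cs:

        // sketch: after being redirected from login.xaml, remove it from the
        // back stack so Back exits the app (requires WP 7.1 / Mango or later)
        protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);
            while (NavigationService.CanGoBack)
                NavigationService.RemoveBackEntry();
        }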


  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The problem code:

    - Works for other, larger objects being maintained.
    - Is presently handling a smaller total amount of data than previously worked.
    - Is not colliding with any function or other object definitions.
    - Can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if (table)
                tables = [table];
            for (i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible, and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object }   // Expansion shows complete contents as expected, but there is one
                                    // oddity: Firebug highlights the array keys in purple instead of
                                    // the usual black for anonymous objects.
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}}   // The correctly parsed string.
        bookmarks_HonoredMule saved
        undefined

    I have tried altering the format of the object keys, to no effect. Further narrowing down the issue: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the nonempty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem - as previously mentioned, I've already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high already, has been further reduced by an amount greater than the quantity of data I'm attempting to store here.


  • Does committed memory go to physical RAM or reserve space in the paging file?

    - by Sil
    When I do VirtualAlloc with MEM_COMMIT, this "allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT:

    1. The description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)."

    2. I also read "Windows via C/C++, 5th edition", and this book says that committing memory means reserving space in the page file...

    The last two cases don't make sense to me. If you commit memory, doesn't that mean that you commit to physical storage (RAM), with the page file being there for swapping out currently unused pages of memory when memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory - so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file, that gives 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described at point 2). What am I missing? Thanks.
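
    A short reconciliation, hedged but consistent with both quotes: committing charges the pages against the system commit limit, which is roughly physical RAM plus the combined size of the paging files. "Reserved in the paging file" means reserved against that limit - a guarantee the page can be backed somewhere - not that pagefile blocks are consumed up front; and a page lives either in RAM or in the pagefile at any given moment, so the space is not needed twice.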


  • To display images on mobile devices, is it necessary for the images to reside on the device?

    - by Shailesh Jaiswal
    I am developing a smart device application in C#. The application contains some images that I display on the emulator. To display the images on the emulator, I need to create a folder of images that resides on the emulator; only after that am I able to display the images. I can create the folder on the emulator by using File - Configure - General - Shared Folder, giving the path of the folder which contains the images. Once I share the folder, the image folder from my application gets mounted on the emulator under the name "Storage Card". Then I can use the path as:

        Bitmap bmp = new Bitmap(@"/Storage Card/ImageName.jpg");

    and the images display on the emulator. Can we display images on the emulator without an image folder residing on the emulator (so that we don't need to place the image folder on the emulator by sharing a folder, as in the case above)? If the answer is no, then to run the application on different mobile devices we need to place the folder containing the images on each device - isn't that so? If the answer is yes, then how can we display images on different mobile devices from our application without placing a folder of images on each device?
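
    Yes - one way, sketched under the assumption that the images can ship inside the executable: add each image to the project with Build Action = Embedded Resource and load it from the assembly at runtime, so nothing has to be copied onto the emulator or each device (the resource name below is illustrative):

        using System.Drawing;
        using System.Reflection;

        // load an image embedded in the executing assembly (.NET Compact Framework)
        Assembly asm = Assembly.GetExecutingAssembly();
        // resource name = default namespace + file name; adjust to the project
        Bitmap bmp = new Bitmap(asm.GetManifestResourceStream("MyApp.ImageName.jpg"));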


  • How important is it that models be consistent across project components?

    - by RonLugge
    I have a project with two components: a server-side component and a client-side component. For various reasons, the client-side device doesn't carry a full copy of the database around. How important is it that my models have a 1:1 correlation between the two sides? And, to extend the question to my bigger concern, are there any time bombs I'm going to run into down the line if they don't? I'm not talking about having different information on each side, but rather that the way the information is encapsulated will vary. (Obviously, storage mechanisms will also vary.) The server side will store each user, each review, and each item in separate tables, and create links between them to gather data as necessary. The client side shouldn't have a complete user database, however, so rather than linking against the user to gather things like the name, I'd store that on the review. In other words:

        --- Server Side ---
        Item:
          +id
          // Store stuff about the item
        User:
          +id
          +Name
          -Password
        Review:
          +id
          +itemId
          +rating
          +text
          +userId

        --- Device Side ---
        Item:
          +id
          +AverageRating
        Review:
          +id
          +rating
          +text
          +userId
          +name
        User:
          +id
          +Name
          // Stuff

    The basic idea is that certain 'critical' information gets moved one level 'up'. A user gets the list of items relevant to their query, with certain review-oriented data moved up (i.e. average rating). If they want more info, they query the detail view for the item, and the actual reviews get queried and added to the dataset (and displayed). If they query the actual review, the review gets queried and they pick up some additional user info along the way (maybe; I'm not sure the user would have any use for the additional user information). My basic concern is that I don't want to glut the user's bandwidth or local storage with a huge variety of information that they just don't need, even if proper database normalization suggests that information REALLY should be stored at a 'lower' level. I've phrased this as a fairly low-level conceptual issue because that's the level I'm trying to think/worry over, but if it matters, I'm creating a PHP/MySQL server that provides data for an iOS/CoreData client.


  • C# language questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x;
        x->someThing = 10;

    3) What is String, with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, because Name is defined on Person.

        // Illegal without an explicit cast, even though the backing
        // store is a Customer: the storage location is of type
        // Person, which doesn't support the member/method being
        // accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10;
        // System.Object defines no member to store an integer value
        // or any other value in. So my question really is: when the
        // integer is boxed, what is the *type* it is actually boxed to?
        // In other words, what is the type that forms the backing
        // store on the heap for this operation?
        object x = i;
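
    On the final point, a tiny experiment settles it: the box on the heap is a copy of the System.Int32 itself, and the boxed object reports that type (a sketch; the comments state the expected output):

        int i = 10;
        object x = i;                   // boxing: heap-allocated copy of the Int32
        IComparable c = i;              // assigning to an interface also boxes
        Console.WriteLine(x.GetType()); // prints "System.Int32"
        Console.WriteLine(x is int);    // prints "True": the box holds an Int32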


  • fastest public web app framework for quick DB apps?

    - by Steve Eisner
    I'd like to pick up a new tech for my toolbox - something for rapid prototyping of web apps. Brief requirements:

    - public access (not hosted on my machine) - like Google's App Engine, etc.
    - no tricky configuration necessary to build a simple web app
    - hosted DB access (small storage provided), including some kind of SQL-ish query language
    - easy front-end HTML templating
    - ability to access it as a JSON service
    - C# or Java, PHP or Python - or a fun new language to learn is OK
    - free!

    An example app, very simple: render an AJAXy editable (add/delete/edit/drag) list of rich-data list items via some template language, so I can quickly mock up a UI for a client. I.e., I can do most of the work client-side, but need a convenient back end to handle the permanent storage. (In fact, I suppose it doesn't even need HTML templating if I can directly access a DB via AJAX calls.) I realize this is a bit vague, but am wondering if anyone has recommendations. A Rails host might be best for this (but probably not free), or maybe App Engine, or some other choice I'm not aware of? I've been doing everything with heavyweight servers (ASP.NET etc.) for so long that I'm just not up on the latest... Thanks - I'll follow up on comments if this isn't clear enough :)


  • Update paths of already-created Paperclip attachments

    - by Horace Loeb
    I used to have this buggy Paperclip config:

        class Photo < ActiveRecord::Base
          has_attached_file :image,
            :storage => :s3,
            :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" },
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => "/:style/:filename"
        end

    This is buggy because two images cannot have the same size and filename. To fix this, I changed the config to:

        class Photo < ActiveRecord::Base
          has_attached_file :image,
            :storage => :s3,
            :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" },
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => "/:style/:id_:filename"
        end

    Unfortunately, this breaks all URLs to attachments I've already created. How can I update those file paths or otherwise get the URLs to work?
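
    One hedged migration idea: the records still know their ids and filenames, so a one-off script can copy each stored object from its old key ("/:style/:filename") to the new key ("/:style/:id_:filename") with the S3 copy operation and then delete the original; alternatively, if the source files are still retrievable, re-saving the attachments uploads them at the newly configured path. Either way, running it once per style keeps the old records reachable under the new scheme.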


  • Email with extra '.com' behind sender email address

    - by CHT
    Currently I have a situation where I sent an email to [email protected], but when I receive mail from [email protected], it shows up as [email protected], with an extra '.com' behind the email address. This just started happening within this week; before this, I didn't change any settings. Currently I am using Outlook 2010. When I checked the email in webmail, it also showed as [email protected], so it seems to have nothing to do with Outlook. However, I also tried Thunderbird 16.0.1, and the problem is still the same. Has anyone experienced this before? Is the problem caused by the sender or the receiver? The message header is as follows:

        Return-Path: [email protected]
        Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged))
            by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650
            for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500
        Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126])
            by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374
            for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400
        X-MimeOLE: Produced By Microsoft Exchange V6.5
        Content-class: urn:content-classes:message
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51"
        Subject: =?big5?B?scTByrPmLblzpfM=?=
        Date: Wed, 31 Oct 2012 14:25:16 +0800
        Message-ID:
        X-MS-Has-Attach: yes
        X-MS-TNEF-Correlator:
        thread-topic: =?big5?B?scTByrPmLblzpfM=?=
        thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw==
        X-Priority: 1
        Priority: Urgent
        Importance: high
        From: "Alice" [email protected]
        To: "Bob" [email protected]
        X-Spam-Score: undef - pointsoft.com.tw is whitelisted.
        X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6
        X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default)
        X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031
        X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg
        X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36


  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My Rails app now returns the (Passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access MySQL at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked, and MySQL is not running; when I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart:

        dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE'
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK
        Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts.
        Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1
        Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped

    Here is most of /etc/mysql/my.cnf:

        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here are entries for some specific programs.
        # The following values assume you have at least 32M RAM.
        # This was formerly known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        # Basic settings
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        # Instead of skip-networking the default is now to listen only on
        # localhost, which is more compatible and is not less secure.
        bind-address = 127.0.0.1

    And here are the permissions for /var/run/mysqld/mysqld.sock:

        srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock

    I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered, but to no avail. Thanks! Dean Richardson

    Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found MySQL man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. I worked through the escalating option levels one by one (innodb_force_recovery=1, innodb_force_recovery=2, etc.). This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and cost me the connection to MySQL. So I've made progress, but until I'm somehow able to drop or repair my development db, I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean

    Update: Right after running sudo mysqld --innodb_force_recovery=1 from the command line, the error.log shows this:

        130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled.
        130308 4:55:39 InnoDB: The InnoDB memory heap is disabled
        130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4
        130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M
        130308 4:55:39 InnoDB: Completed initialization of buffer pool
        130308 4:55:39 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        130308 4:55:39 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        130308 4:55:40 InnoDB: Waiting for the background threads to start
        130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220
        130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!!
        130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
        130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1';
        130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'.
        130308 4:55:41 [Note] Event Scheduler: Loaded 0 events
        130308 4:55:41 [Note] mysqld: ready for connections.
        Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Then, after mysql -u root -p and:

        mysql> drop database molex_app_development;
        ERROR 2013 (HY000): Lost connection to MySQL server during query

    the error.log contains (the tail opens on the remnant of the previous crash dump, then shows the same startup messages as above, then the new crash):

        dean@dgwjasonfried:/var/log/mysql$ tail -f error.log
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f6a1c004bd8): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        [startup messages from 130308 4:55:39 through "mysqld: ready for connections.", identical to the block above]
        130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64).
        130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        10:58:23 UTC - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.
        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        Thread pointer: 0x7f7ba4f6c2f0
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f7ba3065e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039]
        mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b]
        mysqld(+0x65e0fc)[0x7f7ba37160fc]
        mysqld(+0x602be6)[0x7f7ba36babe6]
        mysqld(+0x635006)[0x7f7ba36ed006]
        mysqld(+0x5d7072)[0x7f7ba368f072]
        mysqld(+0x5d7b9c)[0x7f7ba368fb9c]
        mysqld(+0x6a3348)[0x7f7ba375b348]
        mysqld(+0x6a3887)[0x7f7ba375b887]
        mysqld(+0x5c6a86)[0x7f7ba367ea86]
        mysqld(+0x5ae3a7)[0x7f7ba36663a7]
        mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd]
        mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78]
        mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a]
        mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d]
        mysqld(handle_one_connection+0x50)[0x7f7ba3461200]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f7b7c004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.

    --Dean
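
    An editorial note on the log, hedged: the line "Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64)" is the usual signature of system tables still in their pre-upgrade format. Running mysql_upgrade (with the server up under innodb_force_recovery if necessary) rebuilds mysql.proc and the other system tables, and is worth trying before resorting to dropping databases.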


  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom library under Windows 7 64-bit Professional to handle my source code. When I tried Windows Backup and Restore for the first time, I got the following error:

        Backup encountered a problem while backing up file C:\Windows\System32\config\systemprofile\Source.
        Error: (The system cannot find the file specified. (0x80070002))

    I've found a thread on this error on the Microsoft Answers site, but it appears to be 404 (there is a version in Google's cache), and the thread starter never gets an answer that works. The official Microsoft answer on this is:

        This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList with missing ProfileImagePath.

        To check whether you have missing profiles:
        1. Open regedit and navigate to the above registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList).
        2. Expand the list.
        3. Click on each of the profiles listed. The first 3 profiles should have ProfileImagePath values of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively. Starting from the 4th profile, the ProfileImagePath should contain the path to a user profile on your machine, such as C:\Users\Christine. If one or more of the profiles has no profile image, then you have missing profiles.

        To work around this, delete the profile in question. (Caution: the registry contains critical settings that are necessary for your system to function properly. Take extra caution while making changes.)
        1. First, export the ProfileList key for safekeeping. (Right-click on the key, choose "Export", and save it to the desktop.)
        2. Right-click on the profile in question and choose Delete.
        3. Try the backup again.

    This does not work for me. Does anyone have any idea what is going on here?


  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded, so that each thread manages its own postmaster process (one per core); hence, there is a lot of concurrency. Each thread-process loops infinitely through two SQL queries: the first is for reading, the second is for writing. The read operation touches about 500 times as many rows as the write operation does. Here is the output of dstat:

        ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
        usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out | read  writ
          4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0 |  98   2046
          1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0 |  68   2062
          2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0 |  62   2033
          2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0 |  80   1994
          2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0 |  70   1900
          2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0 |  71   1594
          2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0 |  39   2001
          2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0 |  78   1946
          1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0 |  38   1937

    I am pretty sure I could write more often than that, for I have seen it write up to 110-140M in dstat. How can I optimize this process?


  • Is there a setting in Exchange Server 2007 that we can set to make these headers propagate and be received by a POP/IMAP client?

    - by Ruruboy
    When using the EWS Managed API to send email via Exchange Server 2007, I noticed that MAPI clients like MS Outlook display all custom headers, but when I use POP3/IMAP clients like MS Outlook Express, these custom headers do not appear in the message opened from MS Outlook Express. Is there a setting in Exchange Server 2007 that we can set to make these custom headers propagate and be received by a POP/IMAP client? Also, why do the custom headers in the example below show up in lowercase in MAPI clients like MS Outlook? Surprisingly, if we use the SmtpClient class to send email, these headers arrive with their case preserved. Example of headers received by a MAPI client like MS Outlook via Exchange Server 2007:

        Received: from EXMAILVS1.blabla.com ([192.168.191.136]) by cashtp02.blabla.com
            ([XXX.XXX.XX.XXX]) with mapi; Mon, 20 Dec 2010 12:17:05 -0800
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: asfsdf <[email protected]>
        To: asdsdf <[email protected]>
        Date: Mon, 20 Dec 2010 12:17:04 -0800
        Subject: Please send me this header
        Thread-Topic: Please send me this header
        Thread-Index: AQHLoILek7g5cFgHQU6lHHfiKkdUMg==
        Message-ID: <[email protected]>
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-Exchange-Organization-SCL: -1
        X-MS-TNEF-Correlator: <[email protected]>
        customheader1: hello ali
        customheader2: hello Jace
        MIME-Version: 1.0


  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on Mac OS X Server 10.6.8 according to this installation description, and so far everything works fine: when I start WEBrick, the server serves the Redmine pages. The gems and Redmine are installed under the user "redmine". After that, I aimed at configuring Apache 2 with Passenger as described here. As suggested by the description, I also installed the Passenger Pane, which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of manual trial and error. At least I can now reach a Passenger error page.

        // redmine.vhost.conf
        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine"
            # RackEnv production
            # RackBaseURI /
            RailsEnv production
            RailsBaseURI /
            # PassengerUser www-data
            # PassengerGroup www-data
            <Directory "/Users/redmine/Sites/redmine">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    However, the Passenger module still runs into the following error:

        Error message: No such file or directory - config/environment.rb

    The /var/log/apache2/error_log of the web server states the following:

        [warn] NameVirtualHost *:80 has no VirtualHosts
        [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations
        [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]:
        *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner
        (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>)

    I experimented with the user-switching functionality of Passenger as described in the documentation - as you can tell from my configuration file - but I was not successful.
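
    For what it's worth, the usual cause of "No such file or directory - config/environment.rb" with Passenger is a DocumentRoot pointing at the application root instead of its public subdirectory: Passenger detects a Rails application by looking one level above DocumentRoot for config/environment.rb. Under that assumption, the first thing to try is DocumentRoot "/Users/redmine/Sites/redmine/public" (with the <Directory> block updated to match), while keeping RailsEnv production.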


  • Can't boot flash drive on GIGABYTE motherboard

    - by Deltik
    Situation: When I try to boot from my flash drive, my GIGABYTE 970A-UD3 motherboard returns this:

        Loading Operating System ...
        Boot error

    All other motherboards I've tried support booting from that flash drive (and a backup flash drive). The operating systems I tried on both flash drives were created with usb-creator-gtk (Ubuntu USB Startup Disk Creator). I know that the motherboard understands that there is an operating system on the flash drives, because when I erase them it complains in an ALL CAPS RAGE that there isn't an operating system - which is correct. How can I boot, on this motherboard, a flash drive that's bootable from other motherboards?

    Qualification: This question is not a duplicate of this one, because directly writing the ISO 9660 image to the flash drive (dd if=operating_system.iso of=/dev/sdb) still does not get the motherboard to recognize the operating system. This question should be a duplicate of this one, because I provide more information not provided by that poster. This forum thread has broken links and does not have a solution to my problem; nobody knows what's going on in that thread either.

