Search Results

Search found 35121 results on 1405 pages for 'object cache'.


  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as a reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user agent (mobile or not). I've tried something similar to this:

        # mobile users
        if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') {
            set $iphone_request '1';
        }
        if ($iphone_request = '1') {
            proxy_cache mobile;
        }
        if ($iphone_request = '') {
            proxy_cache site;
        }
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_pass http://real-site.tld;

    However, nginx gives an error stating that proxy_cache can't be used inside an if block. Is there any other way to serve from a different cache depending on the browser? Thanks, Tuinslak

    Read the article

  • Get Python 2.7's 'json' to not throw an exception when it encounters random byte strings

    - by Chris Dutrow
    Trying to encode a dict object into JSON using Python 2.7's json (i.e. import json). The object has some byte strings in it that are "pickled" data using cPickle, so for json's purposes they are basically random byte strings. I was using django.utils's simplejson and this worked fine. But I recently switched to Python 2.7 on Google App Engine and they don't seem to have simplejson available anymore. Now that I am using json, it throws an exception when it encounters bytes that aren't part of UTF-8. The error that I'm getting is:

        UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte

    It would be nice if it printed out a string of the character codes like the debugger might do, i.e. \u0002]q\u0000U\u001201. But I really don't much care how it handles this data, just as long as it doesn't throw an exception and continues serializing the information that it does recognize. How can I make this happen? Thanks!

    Read the article

  • Casting objects in C# (ASP.Net MVC)

    - by Mortanis
    I'm coming from a background in ColdFusion, and finally moving onto something modern, so please bear with me. I'm running into a problem casting objects. I have two database tables that I'm using as Models - Residential and Commercial. Both of them share the majority of their fields, though each has a few unique fields. I've created another class as a container that contains the sum of all property fields. I query the Residential and Commercial tables and stuff the results into my container, cunningly called Property. This works fine. However, I'm having problems aliasing the fields from Residential/Commercial onto Property. It's quite easy to create a method for each type: fillPropertyByResidential(Residential source) and fillPropertyByCommercial(Commercial source), and alias the variables. That also works fine, but quite obviously duplicates a bunch of code - all those fields that are shared between the two main Models. So, I'd like a generic fillPropertyBySource() that takes the object, detects whether it's Residential or Commercial, fills the particular fields of each respective type, then does all the fields in common. Except, I gather that in C# variables created inside an if only exist in the scope of that if, so I'm not sure how to do this.

        public property fillPropertyBySource(object source)
        {
            property prop = new property();
            if (source is Residential)
            {
                Residential o = (Residential)source;
                //Fill Residential only fields
            }
            else if (source is Commercial)
            {
                Commercial o = (Commercial)source;
                //Fill Commercial only fields
            }
            //Fill fields shared by both
            prop.price = (int)o.price;
            prop.bathrooms = (float)o.bathrooms;
            return prop;
        }

    "o", being a Commercial or Residential, only exists within the scope of the if. How do I detect the original type of the source object and take action? Bear with me - the shift from ColdFusion into a modern language is pretty..... difficult. More so since I'm used to procedural code and MVC is a massive paradigm shift.

    Edit: I should include the error: The name 'o' does not exist in the current context, for the aliases of price and bathrooms in the shared area.
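
    One way around the scoping problem is to put the shared fields behind a common interface, so a cast is only needed inside each branch. This is a minimal sketch, not the asker's actual classes - IListing and the field names (Bedrooms, ZoningCode) are illustrative assumptions:

        // Hedged sketch: give both models a shared interface for the common fields.
        public interface IListing
        {
            decimal Price { get; }
            float Bathrooms { get; }
        }

        public class Residential : IListing
        {
            public decimal Price { get; set; }
            public float Bathrooms { get; set; }
            public int Bedrooms { get; set; }          // Residential-only field (assumed)
        }

        public class Commercial : IListing
        {
            public decimal Price { get; set; }
            public float Bathrooms { get; set; }
            public string ZoningCode { get; set; }     // Commercial-only field (assumed)
        }

        public class Property
        {
            public decimal Price;
            public float Bathrooms;
            public int Bedrooms;
            public string ZoningCode;

            public static Property FillPropertyBySource(IListing source)
            {
                var prop = new Property();

                // Type-specific fields still need a cast, but only inside their branch
                if (source is Residential)
                    prop.Bedrooms = ((Residential)source).Bedrooms;
                else if (source is Commercial)
                    prop.ZoningCode = ((Commercial)source).ZoningCode;

                // Shared fields come straight off the interface, no cast required
                prop.Price = source.Price;
                prop.Bathrooms = source.Bathrooms;
                return prop;
            }
        }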

    Read the article

  • Mono 2.10.5 Runtime error on Ubuntu 11.10

    - by johnluetke
    I've installed mono-runtime via apt in order to run my Mono console application on Ubuntu via SSH. However, when I run the command mono myapp.exe, it exits with no message and my program does nothing. If I pass the -v switch to Mono, as in mono -v myapp.exe, I get about 10k lines of output (as expected, -v is verbose), with the first few lines being:

        converting method System.OutOfMemoryException:.ctor (string)
        Method System.OutOfMemoryException:.ctor (string) emitted at 0xb7052c28 to 0xb7052c4b (code length 35) [myapp.exe]
        converting method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
        Method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr) emitted at 0xb7052c68 to 0xb7052cf6 (code length 142) [myapp.exe]
        converting method System.SystemException:.ctor (string)

    I read this as the runtime throwing an OutOfMemory exception, but the machine is under no intense load, has plenty of available RAM, and is running nothing other than system processes. I've removed and reinstalled Mono countless times, and have even run the executable on other machines perfectly fine. Am I missing something completely obvious here?

    Read the article

  • Weird exception with delayed_job

    - by Tam
    Trying to queue a job with delayed_job as follows:

        Delayed::Job.enqueue(BackgroundProcess.new(current_user, object))

    current_user and object are not nil when I print them out. The weird thing is that sometimes refreshing the page or running the command again works! Here is the exception trace:

        Delayed::Backend::ActiveRecord::Job Columns (44.8ms) SHOW FIELDS FROM `delayed_jobs`
        TypeError (wrong argument type nil (expected Data)):
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `emit'
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `quick_emit'
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml/rubytypes.rb:86:in `to_yaml'
          vendor/plugins/delayed_job/lib/delayed/backend/base.rb:65:in `payload_object='
          activerecord (2.3.9) lib/active_record/base.rb:2918:in `block in assign_attributes'
          activerecord (2.3.9) lib/active_record/base.rb:2914:in `each'
          activerecord (2.3.9) lib/active_record/base.rb:2914:in `assign_attributes'
          activerecord (2.3.9) lib/active_record/base.rb:2787:in `attributes='
          activerecord (2.3.9) lib/active_record/base.rb:2477:in `initialize'
          activerecord (2.3.9) lib/active_record/base.rb:725:in `new'
          activerecord (2.3.9) lib/active_record/base.rb:725:in `create'
          vendor/plugins/delayed_job/lib/delayed/backend/base.rb:21:in `enqueue'

    Read the article

  • template pass by const reference

    - by 7vies
    Hi, I've looked over a few similar questions, but I'm still confused. I'm trying to figure out how to explicitly (not by compiler optimization etc.), and in a C++03-compatible way, avoid copying an object when passing it to a template function. Here is my test code:

        #include <iostream>
        using namespace std;

        struct C
        {
            C() { cout << "C()" << endl; }
            C(const C&) { cout << "C(C)" << endl; }
            ~C() { cout << "~C()" << endl; }
        };

        template<class T> void f(T) { cout << "f<T>" << endl; }
        template<> void f(C c) { cout << "f<C>" << endl; }          // (1)
        template<> void f(const C& c) { cout << "f<C&>" << endl; }  // (2)

        int main()
        {
            C c;
            f(c);
            return 0;
        }

    (1) accepts the object of type C, and makes a copy. Here is the output:

        C()
        C(C)
        f<C>
        ~C()
        ~C()

    So I've tried to specialize with a const C& parameter (2) to avoid this, but that simply doesn't work (apparently the reason is explained in this question). Well, I could "pass by pointer", but that's kind of ugly. So is there some trick that would allow me to do this somehow nicely?

    EDIT: Oh, probably I wasn't clear. I already have a templated function

        template<class T> void f(T) {...}

    But now I want to specialize this function to accept a const& to another object:

        template<> void f(const SpecificObject&) {...}

    But it only gets called if I define it as

        template<> void f(SpecificObject) {...}

    Read the article

  • Merge two lists of objects which hold different data

    - by C.McAtackney
    I have an object "Project" which has a number of fields, one of which is "Name". I have one spreadsheet of projects which contains some of these fields, and another which contains the rest. However, both spreadsheets have the "Name" field. I've read them in and populated two List<Project> collections, only populating the available fields from that particular source. E.g. a Project from List 1 looks like {Name="MyProj", Type="Form", Priority=NULL}, whereas a Project from List 2 looks like {Name="MyProj", Type=NULL, Priority="High"}. Now, I want to merge these two lists into one in which each Project object has all of its fields populated, with the Name field being used to match the elements. How can I achieve this? Are there any nice ways of doing this concisely? Thanks
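
    A LINQ join on Name is one concise way to do this kind of merge. A minimal sketch, assuming a hypothetical Project class with string Name/Type/Priority properties and that every name appears in both lists (projects present in only one list would be dropped by an inner join):

        // Hedged sketch: class shape and null-handling are assumptions, not the asker's code.
        using System.Collections.Generic;
        using System.Linq;

        public class Project
        {
            public string Name { get; set; }
            public string Type { get; set; }
            public string Priority { get; set; }
        }

        public static class ProjectMerger
        {
            public static List<Project> Merge(List<Project> list1, List<Project> list2)
            {
                return list1
                    .Join(list2,
                          p1 => p1.Name,
                          p2 => p2.Name,
                          (p1, p2) => new Project
                          {
                              Name = p1.Name,
                              // Take whichever side actually has the value
                              Type = p1.Type ?? p2.Type,
                              Priority = p1.Priority ?? p2.Priority
                          })
                    .ToList();
            }
        }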

    Read the article

  • How do I compile on linux to share with all distributions?

    - by Andrew M
    I compiled a PHP extension on Fedora Core 12, but when I send it to someone using CentOS they get the error: "ELF file OS ABI invalid". I'm not sure what causes this. Running file provides the following info:

        ELF 64-bit LSB shared object, AMD x86-64, version 1 (GNU/Linux), not stripped

    An extension that loads fine gives the following from file:

        ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped

    So it seems I need to generate a SYSV type file for some distributions, instead of a GNU/Linux file - no idea how, though. Any pointers? Also, should I be statically linking?

    Read the article

  • Passing data between Castle Windsor's Interceptors

    - by Nhím H? Báo
    I'm adopting Castle Windsor for my WCF project and feel really amazed by it. However, I'm having a scenario that I don't really know if Castle Windsor supports. For example, I have the following chained interceptors:

        Interceptor 1 > Interceptor 2 > Interceptor 3 > Interceptor 4 > Real method

    Interceptor 1 returns some data and I want that to be available in Interceptor 2. Interceptor 2 in turn does its work and returns data that I want to make available in the 3rd, 4th, ... interceptors. The real-case scenario is that we have a WCF service; Interceptor 1 will parse the request header into a Header object (username, password, etc.). The later interceptors and the real method will utilize this Header object. I know that I can use a Session variable to transport the data, but is there a built-in, more elegant, more reliable way to handle this?
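
    One common workaround - a minimal sketch, not a built-in Windsor feature; the RequestHeader, HeaderParsingInterceptor and AuthorizationInterceptor names are illustrative - is to publish the parsed header into an ambient slot that later interceptors in the same call chain can read:

        // Hedged sketch: shares per-call data between chained interceptors via a [ThreadStatic] slot.
        using System;
        using Castle.DynamicProxy;

        public class RequestHeader
        {
            public string Username { get; set; }
            public string Password { get; set; }
        }

        public static class CallContextSlot
        {
            [ThreadStatic]
            public static RequestHeader CurrentHeader;   // one slot per thread, assumed single-threaded call chain
        }

        public class HeaderParsingInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                // Interceptor 1: parse the incoming header and publish it for the rest of the chain
                CallContextSlot.CurrentHeader = new RequestHeader { Username = "alice", Password = "secret" };
                try
                {
                    invocation.Proceed();                // runs interceptors 2..n and finally the real method
                }
                finally
                {
                    CallContextSlot.CurrentHeader = null; // don't leak state between calls
                }
            }
        }

        public class AuthorizationInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                // Interceptor 2..n: read what interceptor 1 published
                RequestHeader header = CallContextSlot.CurrentHeader;
                Console.WriteLine("Authorizing call for " + (header != null ? header.Username : "<anonymous>"));
                invocation.Proceed();
            }
        }

    If the WCF dispatcher hops threads mid-call, a per-request component registered in the container (or an OperationContext extension) would be a safer place for the shared state than a [ThreadStatic] field.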

    Read the article

  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iPhone app for several weeks. Now I've encountered an animation problem that I can't figure out how to resolve. Maybe you can help. Here are the details (a little long, bear with me):

    Basically the effect I want to achieve is: when the user clicks a button, a loading view pops up, hiding the whole screen; the app then does a lot of heavy computation, which takes a few seconds. Once the computation is done, some result views (something like checkers on a checker board) are rendered under the loading view. Once all result views are rendered, I use an animation to remove the loading view and show the result views to the user. Here is what I do: when the user clicks a button, run this code:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)];
        // use a really high index number so it will always be on top
        [self.view insertSubview:loadingViewController.view atIndex:1000];
        [UIView commitAnimations];

    The "loadingViewInserted" function calls another function doing the heavy computation work. Once the computation is done, a lot of result views (like checkers on a checker board) are rendered under the loading view:

        for (int colIndex = 1; colIndex <= result.columns; colIndex++) {
            for (int rowIndex = 1; rowIndex <= result.rows; rowIndex++) {
                ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]];
                [self.view addSubview:rv];
            }
        }

    Once all result views are added, the following animation is invoked to remove the loading view:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
        [loadingViewController.view removeFromSuperview];
        [UIView commitAnimations];

    By doing this, most of the time (maybe 90%) it does exactly what I want. However, sometimes I see a weird result: the loading view shows up first as expected, then before it disappears, some result views, which are supposed to be under the loading view, suddenly appear on top of the loading view, and some of them are only partially rendered. Then the loading view curls up, and everything looks normal again. The weird situation lasts for less than a second, but that is already bad enough to screw up the UI. I have tried all different kinds of things to fix this (using another thread to remove the loading view, making the loading view non-transparent), but none of them works. The only thing that makes it a little better is to hide all the result views first and, after the last animation has finished, unhide all result views in its callback. But this loses the nice effect that when the loading view curls up, the results are already there. At this point, I really think this is a bug in the iPhone OS (I compile it with OS 3.0). Or maybe you can point out what I have done wrong (or could do differently). (Thanks for finishing this long post :-) )

    Read the article

  • Pure virtual destructor in interface

    - by ALOR
    Hello all. Here is my problem. I'm making a C++ DLL, which extensively relies on exporting object instances. So I return my actual instances as pointers to an interface through some exported factory method. The interfaces I use are purely virtual, to avoid linking problems. So I need a pure virtual destructor too, and I implemented one (with an empty body, as I googled it). All compiles perfectly well, except... I can't see whether the actual destructors are called or not - because when I added some std::cout << "hello destructor"; I never get to see it. I have some explicit "delete obj" calls, so that's not the problem. Am I missing something? Is there another way to delete my object through the interface?

    Read the article

  • Minimal backup for Windows 7 system recovery

    - by JIm
    There might not be an answer to this, but for a home Win7 system, what files/directories must be backed up to recover after a Windows crash? I can reinstall software, and I keep data files elsewhere. When I use Acronis home backup software to back up my "critical" files it seems to choose the entire partition. The updates are mostly browser cache files and the like. Or, after a crash, should I just reinstall Windows? I dread the hours of Windows updates that would require. Thanks.

    Read the article

  • Treating a fat webservice in .NET 3.5 C#

    - by Chris M
    I'm dealing with an obese 3rd-party webservice that returns about 3 MB of data for a simple search result; about 50% of the data in that response is junk. Would it make sense then to remap this data to my own result object and ditch the response, so I'm storing 1-2 MB in memory for filtering and sorting rather than using the web response's own object and using 2-4 MB, or am I missing a point? So far I've been accessing the webservice from a separate project and using a new class to provide the interaction and to handle the persistence, so my project looks like this:

        |- Web (mvc2 proj)
        |- DAL (database/storage fluent-nhibernate)
        |- SVCGateway (interaction layer + webservice related models)
        |- Services
        --------------
        |- Tests
        |- Specs

    I'm trying to make the application behave fast, and I also need to store the result set temporarily in case a customer goes to view a product and wants to go back to the results. (The service returns only 500 of a possible 14K results.) So basically I'm looking for confirmation that I'm doing the right thing in pushing the results into my own objects, or whether I'm breaking some rule, or even if there's a better way of handling it. Thanks
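
    For what it's worth, the remapping itself is usually just a projection. A minimal sketch, assuming hypothetical ServiceSearchResult (the fat proxy type) and SearchResultDto (the lean type) classes - neither is part of the asker's actual gateway:

        // Hedged sketch: copy only the fields the UI needs, so the fat response can be collected.
        using System.Collections.Generic;
        using System.Linq;

        public class ServiceSearchResult      // stand-in for the 3rd-party proxy type
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
            // ...plus the ~50% of fields the application never uses
        }

        public class SearchResultDto          // lean object kept for filtering/sorting
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
        }

        public static class SearchResultMapper
        {
            public static List<SearchResultDto> Map(IEnumerable<ServiceSearchResult> response)
            {
                return response
                    .Select(r => new SearchResultDto
                    {
                        Id = r.Id,
                        Title = r.Title,
                        Price = r.Price
                    })
                    .ToList();                // the fat response objects are no longer referenced
            }
        }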

    Read the article

  • How to make DropDownList automatically be selected based on the Label.Text

    - by Archana B.R
    I have a DropDownList in the GridView which should be visible only when Edit is clicked. I have bound the DropDownList from code-behind. When I click on Edit, the label value of that cell should automatically get selected in the DropDownList. The code I have tried is:

        protected void GridView3_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                SqlCommand cmd = new SqlCommand("SELECT Location_Name FROM Location_Table");
                DropDownList bind_drop = (e.Row.FindControl("DropList") as DropDownList);
                bind_drop.DataSource = this.ExecuteQuery(cmd, "SELECT");
                bind_drop.DataTextField = "Location_Name";
                bind_drop.DataValueField = "Location_Name";
                bind_drop.DataBind();
                string Loc_type = (e.Row.FindControl("id2") as Label).Text.Trim();
                bind_drop.Items.FindByValue(Loc_type).Selected = true;
            }
        }

    When I run the code, it gives an exception "Object reference not set" on the last line of the above code. I cannot find out what's wrong. Kindly help
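
    That exception typically means FindControl or FindByValue returned null (the DropDownList only exists on the row currently being edited, or the label text doesn't match any list value). A minimal sketch of a guarded version, assuming the same control IDs as above - this belongs in the page's code-behind and is an illustration, not the asker's code:

        // Hedged sketch: assumes System.Web.UI.WebControls and the DropList/id2 control IDs.
        protected void GridView3_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;

            DropDownList bindDrop = e.Row.FindControl("DropList") as DropDownList;
            Label idLabel = e.Row.FindControl("id2") as Label;
            if (bindDrop == null || idLabel == null)
                return;                                  // row is not in edit mode

            // ...bind bindDrop exactly as before...

            string locType = idLabel.Text.Trim();
            ListItem match = bindDrop.Items.FindByValue(locType);
            if (match != null)
                match.Selected = true;                   // only select when the value actually exists
        }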

    Read the article

  • Why does $('#id') return true if id doesn't exist?

    - by David
    I always wondered why jQuery returns true if I'm trying to find elements by an id selector that doesn't exist in the DOM structure. Like this:

        <div id="one">one</div>
        <script>
            console.log( !!$('#one') )                      // prints true
            console.log( !!$('#two') )                      // is also true! (empty jQuery object)
            console.log( !!document.getElementById('two') ) // prints false
        </script>

    I know I can use !!$('#two').length, since length === 0 if the object is empty, but it seems logical to me that a selector would return the element if found, otherwise null (like the native document.getElementById does). For example, this logic can't be done in jQuery:

        var div = $('#two') || $('<div id="two"></div>');

    Wouldn't it be more logical if the ID selector returned null if not found? Anyone?

    Read the article

  • error string list handle in C#

    - by Chelsea_cole
    Same namespace, 2 forms.

        public class Account   // frm1
        {
            public string Username;
            public string Password;
        }

        public class ListAcc
        {
            public static List<Account> UserList;
        }

        private void button1_Click(object sender, EventArgs e)
        {
            List<Account> UserList = new List<Account>();
            Account acc = new Account();
            acc.Username = textBox1.Text;
            acc.Password = textBox2.Text;
            UserList.Add(acc);
        }

        private void button2_Click(object sender, EventArgs e)   // frm2
        {
            string p = frmDangky.ListAcc.UserList[0].Username; // null -> error
            string p = frmDangky.ListAcc.UserList[0].Password; // null -> error
        }

    Someone help me? :( Why is my string NULL? The textBox is not empty... Thanks!
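
    A minimal sketch of the likely fix (mirroring the snippet above, so the form and textbox names are taken from it, not verified): the static ListAcc.UserList is never initialized, and button1_Click fills a local list instead of the shared one.

        public class Account
        {
            public string Username;
            public string Password;
        }

        public class ListAcc
        {
            // initialize the shared list once, so it is never null
            public static List<Account> UserList = new List<Account>();
        }

        // form 1: add to the shared static list, not to a local variable
        private void button1_Click(object sender, EventArgs e)
        {
            Account acc = new Account();
            acc.Username = textBox1.Text;
            acc.Password = textBox2.Text;
            ListAcc.UserList.Add(acc);
        }

        // form 2: guard against an empty list before indexing
        private void button2_Click(object sender, EventArgs e)
        {
            if (ListAcc.UserList.Count > 0)
            {
                string name = ListAcc.UserList[0].Username;
                string pass = ListAcc.UserList[0].Password;
            }
        }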

    Read the article

  • Id property not populated

    - by fingers
    I have an identity mapping like so:

        Id(x => x.GuidId).Column("GuidId")
            .GeneratedBy.GuidComb().UnsavedValue(Guid.Empty);

    When I retrieve an object from the database, the GuidId property of my object is Guid.Empty, not the actual Guid (the property in the class is of type System.Guid). However, all of the other properties in the object are populated just fine. The database field's data type (SQL Server 2005) is uniqueidentifier, and it is marked as RowGuid. The application that is connecting to the database is a VB.NET Web Site project (not a "Web Application" or "MVC Web Application" - just a regular "Web Site" project). I open the NHibernate session through a custom HttpModule. Here is the HttpModule:

        public class NHibernateModule : System.Web.IHttpModule
        {
            public static ISessionFactory SessionFactory;
            public static ISession Session;
            private static FluentConfiguration Configuration;

            static NHibernateModule()
            {
                if (Configuration == null)
                {
                    string connectionString = cfg.ConfigurationManager.ConnectionStrings["myDatabase"].ConnectionString;
                    Configuration = Fluently.Configure()
                        .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString)))
                        .ExposeConfiguration(c => c.Properties.Add("current_session_context_class", "web"))
                        .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadMap>().ExportTo("C:\\Mappings"));
                }
                SessionFactory = Configuration.BuildSessionFactory();
            }

            public void Init(HttpApplication context)
            {
                context.BeginRequest += delegate
                {
                    Session = SessionFactory.OpenSession();
                    CurrentSessionContext.Bind(Session);
                };
                context.EndRequest += delegate
                {
                    CurrentSessionContext.Unbind(SessionFactory);
                };
            }

            public void Dispose()
            {
                Session.Dispose();
            }
        }

    The strangest part of all is that from my unit test project, the GuidId property is returned as I would expect. I even rigged it to go for the exact row in the exact database the web site was hitting. The only differences I can think of between the two projects are:

    1. The unit test project is in C#
    2. Something with the way the session is managed between the HttpModule and my unit tests

    The configuration for the unit tests is as follows:

        Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString)))
            .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadDetailMap>());

    I am fresh out of ideas. Any help would be greatly appreciated. Thanks

    Read the article

  • Grails unit testing domain classes with Set properties - is this safe?

    - by Ali G
    I've created a domain class in Grails like this:

        class MyObject {
            static hasMany = [tags: String]

            // Have to declare this here, as nullable constraint does not seem to be honoured
            Set tags = new HashSet()

            static constraints = {
                tags(nullable: false)
            }
        }

    Writing unit tests to check the size and content of the MyObject.tags property, I found I had to do the following:

        assertLength(x, myObject.tags as Object[])
        assertEquals(new HashSet([...]), myObject.tags)

    To make the syntax nicer for writing the tests, I implemented the following methods:

        void assertEquals(List expected, Set actual) {
            assertEquals(new HashSet(expected), actual)
        }

        void assertLength(int expected, Set set) {
            assertLength(expected, set as Object[])
        }

    I can now call the assertLength() and assertEquals() methods directly on an instance of Set, e.g.

        assertLength(x, myObject.tags)
        assertEquals([...], myObject.tags)

    I'm new to Groovy and Grails, so I'm unaware how dangerous method overloading like this is. Is it safe? If so, I'm slightly* surprised that these methods (or similar) aren't already available - please let me know if they are.

    * I can see how these methods could also introduce ambiguity if people weren't expecting them. E.g. assertLength(1, set) always passes, no matter what the content of set is.

    Read the article

  • How to detect a crashing tabbed webbrowser and handle it?

    - by David Eaton
    I have a desktop application (Forms) with a tab control; to each tab I assign a new custom webbrowser control. I open up about 10 of these tabs. Each one visits about 100-500 different pages. The trouble is that if one webbrowser control has a problem, it shuts down the entire program. I want to be able to close the offending webbrowser control and open a new one in its place. Is there any event that I need to subscribe to in order to catch a crashing or unresponsive webbrowser control? I am using C# on Windows 7 (Forms), .NET Framework v4.

    ===============================================================

    UPDATE: 1 - The Tabbed WebBrowser Example

    Here is the code I have and how I use the webbrowser control in the most basic way. Create a new Forms project and name it SimpleWeb. Add a new class and name it myWeb.cs; here is the code to use:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using System.Security.Policy;

        namespace SimpleWeb
        {
            //inherit all of webbrowser
            class myWeb : WebBrowser
            {
                public myWeb()
                {
                    //no javascript errors
                    this.ScriptErrorsSuppressed = true;
                    //Something we want set?
                    AssignEvents();
                }

                //keep near the top
                private void AssignEvents()
                {
                    //assign WebBrowser events to our custom methods
                    Navigated += myWeb_Navigated;
                    DocumentCompleted += myWeb_DocumentCompleted;
                    Navigating += myWeb_Navigating;
                    NewWindow += myWeb_NewWindow;
                }

                #region Events
                //List of events: http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser_events%28v=vs.100%29.aspx

                //Fired when a new window opens
                private void myWeb_NewWindow(object sender, System.ComponentModel.CancelEventArgs e)
                {
                    //cancel all popup windows
                    e.Cancel = true;
                    //beep to let you know a new window was canceled
                    Console.Beep(9000, 200);
                }

                //Fired before page is navigated (not sure if its before or during?)
                private void myWeb_Navigating(object sender, System.Windows.Forms.WebBrowserNavigatingEventArgs args)
                {
                }

                //Fired after page is navigated (but not loaded)
                private void myWeb_Navigated(object sender, System.Windows.Forms.WebBrowserNavigatedEventArgs args)
                {
                }

                //Fired after page is loaded (Catch 22 - iframes could be considered a page, can fire more than once. Ads are good examples)
                private void myWeb_DocumentCompleted(System.Object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs args)
                {
                }
                #endregion

                //Answer supplied by mo. (modified)?
                public void OpenUrl(string url)
                {
                    try
                    {
                        //this.OpenUrl(url);
                        this.Navigate(url);
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show("Your App Crashed! Because = " + ex.ToString());
                        //MyApplication.HandleException(ex);
                    }
                }

                //Keep near the bottom
                private void RemoveEvents()
                {
                    //Remove Events
                    Navigated -= myWeb_Navigated;
                    DocumentCompleted -= myWeb_DocumentCompleted;
                    Navigating -= myWeb_Navigating;
                    NewWindow -= myWeb_NewWindow;
                }
            }
        }

    On Form1, drag a standard TabControl and set its Dock to Fill; you can go into the tab collection and delete the pre-populated tabs if you like. Right-click on Form1, select "View Code" and replace it with this code:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using mshtml;

        namespace SimpleWeb
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();

                    //Load up 10 tabs
                    for (int i = 0; i <= 10; i++)
                    {
                        newTab("Test_" + i, "http://wwww.yahoo.com");
                    }
                }

                private void newTab(string Title, String Url)
                {
                    //Create a new tab
                    TabPage newTab = new TabPage();
                    newTab.Name = Title;
                    newTab.Text = Title;

                    //Create webbrowser instance
                    myWeb newWeb = new myWeb();

                    //Add webbrowser to new tab
                    newTab.Controls.Add(newWeb);
                    newWeb.Dock = DockStyle.Fill;

                    //Add new tab to tab pages
                    tabControl1.TabPages.Add(newTab);
                    newWeb.OpenUrl(Url);
                }
            }
        }

    Save and run the project. Using the answer below by mo., you can surf the first URL with no problem, but what about all the URLs the user clicks on? How do we check those? I prefer not to add events to every single HTML element on a page; there has to be a way to run the new URLs through the OpenUrl function before it navigates, without creating an endless loop. Thanks.
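
    Not an official answer, just a minimal sketch of one option: WinForms funnels exceptions thrown on the UI thread (including ones raised inside WebBrowser event handlers) through Application.ThreadException, so a handler there can log the failure and let the form recycle the offending tab instead of letting the whole process die. A hard native crash inside the IE engine itself cannot be caught this way, and RecycleBrowser is an illustrative hook, not a framework member.

        // Hedged sketch: assumes the SimpleWeb project above; catches UI-thread exceptions centrally.
        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Must be set before any window handle is created
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += OnUiThreadException;

                Application.EnableVisualStyles();
                Application.Run(new Form1());
            }

            static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
            {
                // Log the failure, then tear down and recreate the offending browser tab
                // instead of crashing the whole application.
                Console.Error.WriteLine("WebBrowser failure: " + e.Exception);
                // Form1.Current.RecycleBrowser();   // illustrative hook, not implemented above
            }
        }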

    Read the article

  • What is XcvPort (used in OpenPrinter function)

    - by RiskX
    I'm using the OpenPrinter function, and the first parameter the function takes is called "pPrinterName". This is its MSDN description:

        [in] Pointer to a null-terminated string that specifies the name of the printer or print server, the printer object, the XcvMonitor, or the XcvPort.
        For a printer object use: PrinterName,Job xxxx.
        For an XcvMonitor, use: ServerName,XcvMonitor MonitorName.
        For an XcvPort, use: ServerName,XcvPort PortName.

    Obviously I'm interested in the XcvPort part. What exactly is an XcvPort? I know it seems like a lazy person's question, but I really couldn't find info about this concept. If I would like to open a printer on port ABC, should I write "\\MyServer,XcvPort ABC"? Thank you for your answers!

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there are some problems that prevent that disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager - gparted - and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago. This is what htop says about it now (2012-11-06_1900):

        PID  USER PRI NI  VIRT   RES SHR S CPU% MEM%    TIME+ Command
        3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk is just that slow because maybe it's damaged. I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   3562 MB in 2.00 seconds = 1783.29 MB/sec
         Timing buffered disk reads:  82 MB in 3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
         multcount = 0 (off)
         readonly  = 0 (off)
         readahead = 256 (on)
         geometry  = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  14,24   47,81   14,63    0,95    0,00   22,37
        Device:  rrqm/s  wrqm/s   r/s   w/s   rkB/s   wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
        sda        0,59    8,29  2,42  5,14   43,17  160,17    53,75     0,30  39,80    8,72   54,42   3,95   2,99
        sdb      137,54    5,48  9,23  0,20  587,07   22,73   129,35     0,07   7,70    7,51   16,18   2,17   2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    That's around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer getting this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and since I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300), and now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseeks before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes. So it doesn't seem to be a linear progression on a big scale; maybe there are only some areas that need work, with big gaps in between them. (Times are in CET.)

    Read the article

  • Error while copying DataColumn with data from one table to another

    - by Harikrishna
    I have one DataTable, like:

        DataTable addressAndPhones;

    There are four columns - name, address, phoneno, and other details - and I only want the two columns name and address from it, so what I do is:

        DataTable addressAndPhones2;
        addressAndPhones2.Columns.Add(new DataColumn(addressAndPhones.Columns["name"].ColumnName));
        addressAndPhones2.Columns.Add(new DataColumn(addressAndPhones.Columns["address"].ColumnName));

    But it gives me an error, so how can I copy a fixed number of columns with data from one table to another table?

        ERROR: Object reference not set to an instance of an object.
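
    For what it's worth, a minimal sketch of one way to do this, assuming the column names shown above (the NullReferenceException here most likely comes from using addressAndPhones2 before it is constructed, and adding columns alone would not copy the rows anyway): DataView.ToTable can project just the wanted columns, data included.

        // Hedged sketch: projects only "name" and "address", rows included.
        using System.Data;

        public static class TableProjection
        {
            public static DataTable NameAndAddressOnly(DataTable addressAndPhones)
            {
                // false = keep duplicate rows; then list the columns to keep
                return addressAndPhones.DefaultView.ToTable(false, "name", "address");
            }
        }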

    Read the article

  • How can I "override" deepcopy in Python?

    - by Az
    Hi there, I'd like to override __deepcopy__ for a given SQLAlchemy-mapped class such that it ignores any SQLA attributes but deepcopies everything else that's part of the class. I'm not particularly familiar with overriding Python's built-in methods, but I've got some idea as to what I want. Let's just make a very simple class User that's mapped using SQLA:

        class User(object):
            def __init__(self, user_id, name):
                self.user_id = user_id
                self.name = name

    I've used dir() to see, before and after mapping, what SQLAlchemy-specific attributes there are, and I've found _sa_class_manager and _sa_instance_state. Provided those are the only ones, how would I ignore them when defining __deepcopy__? Also, are there any other attributes that SQLA injects into the mapped object? (I asked this in a previous question (as an edit a few days after I selected an answer to the main question, though) but I think I missed the train there. Apologies for that.)

    Read the article

  • ColdFusion CFC creation taking a variable amount of time to execute

    - by Bazza
    I've been doing some logging of object creation times in our open account process in production. Periodically, initializing an object would take way longer than expected. By initializing I mean calling its init() and passing a couple of arguments that may be simple variables or objects, e.g.:

        <cfset validateObj = createObject("component", "compExample").init(
            productionMode = VARIABLES.productionMode,
            ipWhiteListed = isWhiteListed,
            ipLocatorObj = VARIABLES.ipLocatorObj
        ) />

    That's all that happens in the init() methods. Generally the execution time would be 0ms, but at random times I might get 3 or 3.5 seconds. This is not specific to one particular server or to our generally busy period; it appears to be quite random. One thought was that these templates were being evicted from our template cache as they are not especially frequently used, although I checked cfstat on a number of servers and the max CP/Sec is -1. Running CF 8,0,1. Has anybody else ever come across this?

    Read the article

  • OpenLayers: Get resolution of map in a given projection (4326)

    - by David Pfeffer
    I'm using OpenLayers to display OpenStreetMap maps. (Though, I'd assume this should be general enough to work for any map product...) I'm displaying some very sophisticated vector overlays, and the amount and resolution of the features I'm returning from the server via GeoJSON to overlay has proven too much for many computers. What I'd like to do now instead is to only send data befitting the resolution of the current zoom. This should be relatively easy to do using the getResolution and calculateBounds methods on the Map object. calculateBounds returns a Bounds object that then can be transformed between the map projection and display projection. How do I transform the resolution into my desired projection (4326 in this case)?

    Read the article
