Search Results

Search found 62712 results on 2509 pages for 'memory error'.


  • My Linux server takes more than an hour to boot. Suggestions?

    - by jamieb
    I am building a CentOS 5.4 system that boots off a compact flash card using a card reader that emulates an IDE drive. It literally takes about an hour to boot. The ultra-slow part occurs when Grub is loading the kernel. Once that's done, the rest of the boot process only takes about a minute to get to a login prompt. Does anyone have any suggestions? I suspect that it may have to do with UDMA. Everything IDE-related in my BIOS seems to checkout. The read performance hdparm is telling me 1.77 MB/s. Ouch! (But even at that rate, it still shouldn't take an hour to decompress and load the kernel) [root@server ~]# hdparm -tT /dev/hdc /dev/hdc: Timing cached reads: 2444 MB in 2.00 seconds = 1222.04 MB/sec Timing buffered disk reads: 6 MB in 3.39 seconds = 1.77 MB/sec Trying to enable DMA is a no-go though: [root@server ~]# hdparm -d1 /dev/hdc /dev/hdc: setting using_dma to 1 (on) HDIO_SET_DMA failed: Operation not permitted using_dma = 0 (off) Here's some command outputs that might help: System [root@server ~]# uname -a Linux server.localdomain 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:47:32 EDT 2009 i686 i686 i386 GNU/Linux PCI info: [root@server ~]# lspci -v 00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02) Subsystem: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller]) Subsystem: Intel Corporation 82945G/GZ Integrated Graphics Controller Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at fdf00000 (32-bit, non-prefetchable) [size=512K] I/O ports at ff00 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at fdf80000 (32-bit, non-prefetchable) [size=256K] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 2 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 Flags: bus master, medium devsel, latency 0, IRQ 16 I/O ports at fe00 [size=32] 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 Flags: bus master, medium devsel, latency 0, IRQ 17 I/O ports at fd00 [size=32] 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 Flags: bus master, medium devsel, latency 0, IRQ 18 I/O ports at fc00 [size=32] 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at fb00 [size=32] 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at fdfff000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) Flags: bus master, fast devsel, 
latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fdd00000 Capabilities: [50] #0d [0000] 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) (prog-if 80 [Master]) Subsystem: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 17 I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at f800 [size=16] Capabilities: [70] Power Management version 2 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01) Subsystem: Intel Corporation 82801G (ICH7 Family) SMBus Controller Flags: medium devsel, IRQ 17 I/O ports at 0500 [size=32] 01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 18 I/O ports at de00 [size=256] Memory at fdeff000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 17 I/O ports at dc00 [size=256] Memory at fdefe000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 19 I/O ports at da00 [size=256] Memory at fdefd000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 hdparm ouput: [root@server ~]# hdparm /dev/hdc /dev/hdc: multcount = 0 (off) IO_support = 0 (default 16-bit) unmaskirq = 0 (off) using_dma = 0 (off) keepsettings = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 8146/16/63, sectors = 8211168, start = 0 [root@server ~]# hdparm -I /dev/hdc /dev/hdc: ATA device, with non-removable media Model Number: InnoDisk Corp. - iCF4000 4GB Serial Number: 20091023AACA70000753 Firmware Revision: 081107 Standards: Supported: 5 Likely used: 6 Configuration: Logical max current cylinders 8146 8146 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 8211168 LBA user addressable sectors: 8211168 device size with M = 1024*1024: 4009 MBytes device size with M = 1000*1000: 4204 MBytes (4 GB) Capabilities: LBA, IORDY(can be disabled) Standby timer values: spec'd by Vendor R/W multiple sector transfer: Max = 2 Current = 2 DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * Power Management feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * CFA feature set * Mandatory FLUSH_CACHE HW reset results: CBLID- above Vih Device num = 0 CFA power mode 1: enabled and required by some commands Maximum current = 100ma Checksum: correct
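
    One thing worth trying before blaming the card: ask the driver for a specific transfer mode instead of just -d1. This is only a sketch, and HDIO_SET_DMA may still be refused if the kernel's IDE driver for this controller has no DMA support, in which case a libata-based driver (or a BIOS IDE-mode change) is the next thing to look at.

        # Sketch: request a UDMA mode the card advertises (udma2 in the -I output above),
        # plus 32-bit I/O and IRQ unmasking, then re-measure throughput.
        hdparm -X udma2 -c1 -u1 -d1 /dev/hdc
        hdparm -tT /dev/hdc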


  • Looking for a kiosk-style / camera-store easy photo memory card to CD/DVD burning program for a Windows 7 notebook? For a non-techie user.

    - by Rob
    I'm looking for a kiosk-style, camera-shop-easy program for burning photos from a memory card to CD/DVD, for a non-techie user - the kind of system you see in camera shops, e.g. Jessops and Boots stores in the UK. This is for my Dad, who is adept at general PC usage as a notebook owner but would prefer something fairly simple. The task is burning photos to CD/DVD in their original .jpg file form, i.e. NOT as a CD/DVD video or slideshow. I'm guessing this might be possible in Picasa, but all the options available might be superfluous and confusing; he could probably learn to use it, but I thought I would try simpler options first. I'm looking for something that guides the user through the steps of the process, 'wizard' style. Any suggestions? Platform: HP Windows 7 Home notebook with CD/DVD burner and SD memory card slot.


  • How do you re-mount an ext3 filesystem read-write after it gets mounted read-only from a disk error?

    - by cagenut
    Its a relatively common problem when something goes wrong in a SAN for ext3 to detect the disk write errors and remount the filesystem read-only. Thats all well and good, only when the SAN is fixed I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold: [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][active] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready] [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo [root@localhost ~]# touch /mnt/foo/blah All good, now I yank the LUN out from under it. [root@localhost ~]# touch /mnt/foo/blah [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system [root@localhost ~]# tail /var/log/messages Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2. Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545 Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2 Mar 18 13:17:36 localhost kernel: ext3_abort called. Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only It only thinks its read-only, in reality its not even there. [root@localhost ~]# multipath -ll sdb: checker msg is "tur checker reports path is down" sdc: checker msg is "tur checker reports path is down" mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][enabled] \_ 1:0:0:1 sdb 8:16 [failed][faulty] \_ 2:0:0:1 sdc 8:32 [failed][faulty] [root@localhost ~]# ll /mnt/foo/ ls: reading directory /mnt/foo/: Input/output error total 20 -rw-r--r-- 1 root root 0 Mar 18 13:11 bar How it still remembers that 'bar' file being there... mystery, but not important right now. Now I re-present the LUN: [root@localhost ~]# tail /var/log/messages Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up Mar 18 13:23:58 localhost multipathd: 8:16: reinstated Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1 Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up Mar 18 13:23:59 localhost multipathd: 8:32: reinstated Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2 Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent) Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered [root@localhost ~]# multipath -ll mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400 [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=2][enabled] \_ 1:0:0:1 sdb 8:16 [active][ready] \_ 2:0:0:1 sdc 8:32 [active][ready] Great right? It says [rw] right there. 
Not so fast: [root@localhost ~]# touch /mnt/foo/blah touch: cannot touch `/mnt/foo/blah': Read-only file system OK, doesn't do it automatically, I'll just give it a little push: [root@localhost ~]# mount -o remount /mnt/foo mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only The hell you are: [root@localhost ~]# mount -o remount,rw /mnt/foo mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it on-line. An hour of googling has gotten me nowhere either. Save me ServerFault.
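
    For what it's worth, a sketch of the sequence that usually clears this (device and mount point taken from above; ext3 may still insist on an fsck because the journal was aborted):

        # Check whether the block device itself is flagged read-only, clear it,
        # then ask the kernel to remount the filesystem read-write.
        blockdev --getro /dev/mapper/mpath0     # prints 1 if the RO flag is set
        blockdev --setrw /dev/mapper/mpath0
        mount -o remount,rw /mnt/foo
        # If the remount still refuses because of the aborted journal:
        # umount /mnt/foo && fsck.ext3 -f /dev/mapper/mpath0 && mount /dev/mapper/mpath0 /mnt/foo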


  • VHDL - Problem with std_logic_vector

    - by wretrOvian
    Hi, i'm coding a 4-bit binary adder with accumulator: library ieee; use ieee.std_logic_1164.all; entity binadder is port(n,clk,sh:in bit; x,y:inout std_logic_vector(3 downto 0); co:inout bit; done:out bit); end binadder; architecture binadder of binadder is signal state: integer range 0 to 3; signal sum,cin:bit; begin sum<= (x(0) xor y(0)) xor cin; co<= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin); process begin wait until clk='0'; case state is when 0=> if(n='1') then state<=1; end if; when 1|2|3=> if(sh='1') then x<= sum & x(3 downto 1); y<= y(0) & y(3 downto 1); cin<=co; end if; if(state=3) then state<=0; end if; end case; end process; done<='1' when state=3 else '0'; end binadder; The output : -- Compiling architecture binadder of binadder ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): No feasible entries for infix operator "xor". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): Type error resolving infix expression "xor" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in left operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Type error resolving infix expression "or" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): No feasible entries for infix operator "&". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): Type error resolving infix expression "&" as type ieee.std_logic_1164.std_logic_vector. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(39): VHDL Compiler exiting I believe i'm not handling std_logic_vector's correctly. Please tell me how? :(
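
    The compiler is complaining because bit and std_logic are different types: x and y are std_logic_vector, while sum, cin and the co port are bit, and std_logic_1164 defines no xor/and/& overloads that mix the two. A minimal, untested sketch of the declarations with one consistent type:

        library ieee;
        use ieee.std_logic_1164.all;

        entity binadder is
          port(n, clk, sh : in    std_logic;
               x, y       : inout std_logic_vector(3 downto 0);
               co         : inout std_logic;
               done       : out   std_logic);
        end binadder;

        architecture binadder of binadder is
          signal sum, cin : std_logic;   -- same element type as x and y
        begin
          -- std_logic xor/and/or std_logic: the std_logic_1164 overloads now apply
          sum <= (x(0) xor y(0)) xor cin;
          co  <= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin);
          -- the shift in the process also type-checks once sum is std_logic:
          --   x <= sum & x(3 downto 1);
        end binadder;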


  • Getting TypeError: Error #1009: Cannot access a property or method of a null object reference.

    - by nemade-vipin
    hello friends, I have created small application for login in flex desktop application. In which I am refering webservice method for login for this have created the Authentication class. Now I want to refer different Textinput value for mobile no and Textinput value for password. In my Authentication class. for this I have created the object of mxml class.And using this I am getting the mobile no value and password value in My Action script class. This my code :- SBTS.mxml file xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" public function login():void { var User:Authentication; User = new Authentication(); User.authentication(); } ]] text=" Select Child."/ Action script class :- package src { import adobe.utils.XMLUI; import generated.webservices.*; import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.rpc.events.FaultEvent; public class Authentication { [ Bindable] private var childName:ArrayCollection; [ Bindable] private var childId:ArrayCollection; private var photoFeed:ArrayCollection; private var arrayOfchild:Array; private var newEntry:GetSBTSMobileAuthentication; public var user:SBTSWebService; public var mxmlobj:SBTS; public function authentication():void { user = new SBTSWebService(); if(user!=null) { user.addSBTSWebServiceFaultEventListener(handleFaults); user.addgetSBTSMobileAuthenticationEventListener(authenticationResult); newEntry = new GetSBTSMobileAuthentication(); if(newEntry!=null) { mxmlobj = new SBTS(); if(mxmlobj != null) { newEntry.mobile = mxmlobj.mobileno.text; // Getting error here error mention below newEntry.password= mxmlobj.password.text; } user.getSBTSMobileAuthentication(newEntry); } } } public function handleFaults(event:FaultEvent):void { Alert.show( "A fault occured contacting the server. Fault message is: " + event.fault.faultString); } public function authenticationResult(event:GetSBTSMobileAuthenticationResultEvent):void { if(event.result != null && event.result._return0) { if(event.result._return 0) { var UserId:int = event.result._return; if(mxmlobj != null) { mxmlobj.loginform.enabled = false; mxmlobj.viewstack2.selectedIndex=1; } } else { Alert.show( "Authentication fail"); } } } } } I am getting this error :- TypeError: Error #1009: Cannot access a property or method of a null object reference. at SBTSBusineesObject::Authentication/authentication()[E:\Users\User1\Documents\Fl ex Builder 3\SBTS\src\SBTSBusineesObject\Authentication.as:35] at SBTS/login()[E:\Users\User1\Documents\Flex Builder 3\SBTS\src\SBTS.mxml:12] at SBTS/___SBTS_Button1_click()[E:\Users\User1\Documents\Flex Builder 3\SBTS\src\SBTS.mxml:27] please help me to remove this error.
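
    The null reference is almost certainly mxmlobj: new SBTS() builds a fresh component whose children (mobileno, password) have not been created yet, so mxmlobj.mobileno is null. A hedged sketch of one fix is to pass the view that is actually on screen into Authentication instead of instantiating a new one (class and control names taken from the code above):

        // Authentication.as - accept the live view instead of creating a new SBTS
        public function authentication(view:SBTS):void {
            user = new SBTSWebService();
            user.addSBTSWebServiceFaultEventListener(handleFaults);
            user.addgetSBTSMobileAuthenticationEventListener(authenticationResult);

            mxmlobj = view;                           // keep it for authenticationResult()
            newEntry = new GetSBTSMobileAuthentication();
            newEntry.mobile   = view.mobileno.text;   // children exist on the displayed view
            newEntry.password = view.password.text;
            user.getSBTSMobileAuthentication(newEntry);
        }

        // SBTS.mxml - pass "this" when logging in:
        //   var auth:Authentication = new Authentication();
        //   auth.authentication(this);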


  • What could cause this difference in behaviour from iPhone OS 3.0 to iOS 4.0?

    - by frankodwyer
    I am getting a strange EXC_BAD_ACCESS error when running my app on iOS4. The app has been pretty solid on OS3.x for some time - not even seeing crash logs in this area of the code (or many at all) in the wild. I've tracked the error down to this code: main class: - (void) sendPost:(PostRequest*)request { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSURLResponse* response; NSError* error; NSData *serverReply = [NSURLConnection sendSynchronousRequest:request.request returningResponse:&response error:&error]; ServerResponse* serverResponse=[[ServerResponse alloc] initWithResponse:response error:error data:serverReply]; [request.objectToNotifyWhenDone performSelectorOnMainThread:request.targetToNotifyWhenDone withObject:serverResponse waitUntilDone:YES]; [pool drain]; } (Note: sendPost is run on a separate thread for each invocation of it. PostRequest is just a class to encapsulate a request and a selector to notify when complete) ServerResponse.m: @synthesize response; @synthesize replyString; @synthesize error; @synthesize plist; - (ServerResponse*) initWithResponse:(NSURLResponse*)resp error:(NSError*)err data:(NSData*)serverReply { self.response=resp; self.error=err; self.plist=nil; self.replyString=nil; if (serverReply) { self.replyString = [[[NSString alloc] initWithBytes:[serverReply bytes] length:[serverReply length] encoding: NSASCIIStringEncoding] autorelease]; NSPropertyListFormat format; NSString *errorStr; plist = [NSPropertyListSerialization propertyListFromData:serverReply mutabilityOption:NSPropertyListImmutable format:&format errorDescription:&errorStr]; } return self; } ServerResponse.h: @property (nonatomic, retain) NSURLResponse* response; @property (nonatomic, retain) NSString* replyString; @property (nonatomic, retain) NSError* error; @property (nonatomic, retain) NSDictionary* plist; - (ServerResponse*) initWithResponse:(NSURLResponse*)response error:(NSError*)error data:(NSData*)serverReply; This reliably crashes with a bad access in the line: self.error=err; ...i.e. in the synthesized property setter! I'm stumped as to why this should be, given the code worked on the previous OS and hasn't changed since (even the binary compiled with the previous SDK crashes the same way, but not on OS3.0) - and given it is a simple property method. Any ideas? Could the NSError implementation have changed between releases or am I missing something obvious?
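
    One thing to rule out first: the NSError* handed to sendSynchronousRequest:returningResponse:error: is only guaranteed to be written when the call fails, and here error is never initialized, so on a successful request err can be a garbage pointer that blows up in the retaining setter. A hedged sketch of a more defensive sendPost body:

        NSURLResponse *response = nil;
        NSError *error = nil;   // initialize; only meaningful if the request fails
        NSData *serverReply = [NSURLConnection sendSynchronousRequest:request.request
                                                     returningResponse:&response
                                                                 error:&error];

        // Only pass the error along when there genuinely was no reply
        ServerResponse *serverResponse =
            [[ServerResponse alloc] initWithResponse:response
                                               error:(serverReply ? nil : error)
                                                data:serverReply];

    It is also worth adding the usual if ((self = [super init])) guard to ServerResponse's initializer, which is missing in the code above.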


  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high performance and high scalable system not only on top of the cloud platform but the on-premise environment as well. On March 2011 the Windows Azure AppFabric Caching had been production launched. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announce a grand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). And the original Windows Azure AppFabric Caching was renamed to Windows Azure Shared Caching.   What’s Caching (Preview) If you had been using the Shared Caching you should know that it is constructed by a bunch of cache servers. And when you want to use you should firstly create a cache account from the developer portal and specify the size you want to use, which means how much memory you can use to store your data that wanted to be cached. Then you can add, get and remove them through your code through the cache URL. The Shared Caching is a multi-tenancy system which host all cached items across all users. So you don’t know which server your data was located. This caching mode works well and can take most of the cases. But it has some problems. The first one is the performance. Since the Shared Caching is a multi-tenancy system, which means all cache operations should go through the Shared Caching gateway and then routed to the server which have the data your are looking for. Even though there are some caches in the Shared Caching system it also takes time from your cloud services to the cache service. Secondary, the Shared Caching service works as a block box to the developer. The only thing we know is my cache endpoint, and that’s all. Someone may satisfied since they don’t want to care about anything underlying. But if you need to know more and want more control that’s impossible in the Shared Caching. The last problem would be the price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching we have to pay for the cache service while waste of some of our memory and CPU locally. Since the issues above Microsoft offered a new caching mode over to us, which is the Caching (Preview). Instead of having a separated cache service, the Caching (Preview) leverage the memory and CPU in our cloud services (web role and worker role) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which hosted or near our cloud applications. Without any gateway and routing, since it located in the same data center and same racks, it provides really high performance than the Shared Caching. The Caching (Preview) works side-by-side to our application, initialized and worked as a Windows Service running in the virtual machines invoked by the startup tasks from our roles, we could get more information and control to them. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, so it’s free. What we need to pay is the original computing price. And the resource on each machines could be used more efficiently.   Enable Caching (Preview) It’s very simple to enable the Caching (Preview) in a cloud service. Let’s create a new windows azure cloud project from Visual Studio and added an ASP.NET Web Role. Then open the role setting and select the Caching page. 
This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview) just open the “Enable Caching (Preview Release)” check box. And then we need to specify which mode of the caching clusters we want to use. There are two kinds of caching mode, co-located and dedicate. The co-located mode means we use the memory in the instances we run our cloud services (web role or worker role). By using this mode we must specify how many percentage of the memory will be used as the cache. The default value is 30%. So make sure it will not affect the role business execution. The dedicate mode will use all memory in the virtual machine as the cache. In fact it will reserve some for operation system, azure hosting etc.. But it will try to use as much as the available memory to be the cache. As you can see, the Caching (Preview) was defined based on roles, which means all instances of this role will apply the same setting and play as a whole cache pool, and you can consume it by specifying the name of the role, which I will demonstrate later. And in a windows azure project we can have more than one role have the Caching (Preview) enabled. Then we will have more caches. For example, let’s say I have a web role and worker role. The web role I specified 30% co-located caching and the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As the figure above, cache 1 was contributed by three web role instances while cache 2 was contributed by 2 worker role instances. Then we can add items into cache 1 and retrieve it from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2 since they are isolated. Back to our Visual Studio we specify 30% of co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches. Now we just use the default one. Now we had enabled the Caching (Preview) in our web role settings. Next, let’s have a look on how to consume our cache.   Consume Caching (Preview) The Caching (Preview) can only be consumed by the roles in the same cloud services. As I mentioned earlier, a cache contributed by web role can be connected from a worker role if they are in the same cloud service. But you cannot consume a Caching (Preview) from other cloud services. This is different from the Shared Caching. The Shared Caching is opened to all services if it has the connection URL and authentication token. To consume the Caching (Preview) we need to add some references into our project as well as some configuration in the Web.config. NuGet makes our life easy. Right click on our web role project and select “Manage NuGet packages”, and then search the package named “WindowsAzure.Caching”. In the package list install the “Windows Azure Caching Preview”. It will download all necessary references from the NuGet repository and update our Web.config as well. Open the Web.config of our web role and find the “dataCacheClients” node. Under this node we can specify the cache clients we are going to use. For each cache client it will use the role name to identity and find the cache. Since we only have this web role with the Caching (Preview) enabled so I pasted the current role name in the configuration. Then, in the default page I will add some code to show how to use the cache. 
I will have a textbox on the page where user can input his or her name, then press a button to generate the email address for him/her. And in backend code I will check if this name had been added in cache. If yes I will return the email back immediately. Otherwise, I will sleep the tread for 2 seconds to simulate the latency, then add it into cache and return back to the page. 1: protected void btnGenerate_Click(object sender, EventArgs e) 2: { 3: // check if name is specified 4: var name = txtName.Text; 5: if (string.IsNullOrWhiteSpace(name)) 6: { 7: lblResult.Text = "Error. Please specify name."; 8: return; 9: } 10:  11: bool cached; 12: var sw = new Stopwatch(); 13: sw.Start(); 14:  15: // create the cache factory and cache 16: var factory = new DataCacheFactory(); 17: var cache = factory.GetDefaultCache(); 18:  19: // check if the name specified is in cache 20: var email = cache.Get(name) as string; 21: if (email != null) 22: { 23: cached = true; 24: sw.Stop(); 25: } 26: else 27: { 28: cached = false; 29: // simulate the letancy 30: Thread.Sleep(2000); 31: email = string.Format("{0}@igt.com", name); 32: // add to cache 33: cache.Add(name, email); 34: } 35:  36: sw.Stop(); 37: lblResult.Text = string.Format( 38: "Cached = {0}. Duration: {1}s. {2} => {3}", 39: cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email); 40: } The Caching (Preview) can be used on the local emulator so we just F5. The first time I entered my name it will take about 2 seconds to get the email back to me since it was not in the cache. But if we re-enter my name it will be back at once from the cache. Since the Caching (Preview) is distributed across all instances of the role, so we can scaling-out it by scaling-out our web role. Just use 2 instances and tweak some code to show the current instance ID in the page, and have another try. Then we can see the cache can be retrieved even though it was added by another instance.   Consume Caching (Preview) Across Roles As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let’s add another web role in our cloud solution and add the same code in its default page. In the Web.config we add the cache client to one enabled in the last role, by specifying its role name here. Then we start the solution locally and go to web role 1, specify the name and let it generate the email to us. Since there’s no cache for this name so it will take about 2 seconds but will save the email into cache. And then we go to web role 2 and specify the same name. Then you can see it retrieve the email saved by the web role 1 and returned back very quickly. Finally then we can upload our application to Windows Azure and test again. Make sure you had changed the cache cluster status storage account to the real azure account.   More Awesome Features As a in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is the high availability support. This is the first time I have heard that a distributed cache support high availability. In the distributed cache world if a cache cluster was failed, the data it stored will be lost. This behavior was introduced by Memcached and is followed by almost all distributed cache productions. But Caching (Preview) provides high availability, which means you can specify if the named cache will be backup automatically. If yes then the data belongs to this named cache will be replicated on another role instance of this role. 
Then if one of the instance was failed the data can be retrieved from its backup instance. To enable the backup just open the Caching page in Visual Studio. In the named cache you want to enable backup, change the Backup Copies value from 0 to 1. The value of Backup Copies only for 0 and 1. “0” means no backup and no high availability while “1” means enabled high availability with backup the data into another instance. But by using the high availability feature there are something we need to make sure. Firstly the high availability does NOT means the data in cache will never be lost for any kind of failure. For example, if we have a role with cache enabled that has 10 instances, and 9 of them was failed, then most of the cached data will be lost since the primary and backup instance may failed together. But normally is will not be happened since MS guarantees that it will use the instance in the different fault domain for backup cache. Another one is that, enabling the backup means you store two copies of your data. For example if you think 100MB memory is OK for cache, but you need at least 200MB if you enabled backup. Besides the high availability, the Caching (Preview) support more features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching. It supports local cache with notification. It also support absolute and slide window expiration types as well. And the Caching (Preview) also support the Memcached protocol as well. This means if you have an application based on Memcached, you can use Caching (Preview) without any code changes. What you need to do is to change the configuration of how you connect to the cache. Similar as the Windows Azure Shared Caching, MS also offers the out-of-box ASP.NET session provider and output cache provide on top of the Caching (Preview).   Summary Caching is very important component when we building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effect. So now we have two caching solutions in Windows Azure, the Shared Caching and Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near distributed cache, then you’d better use Caching (Preview).   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • apt-get install was interrupted

    - by user3475299
    I am new to Ubuntu. I got the following lines after an interrupted apt-get install. Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic update-initramfs: Generating /boot/initrd.img-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic /usr/sbin/grub-mkconfig: 14: /etc/default/grub: nouveau.modeset=0: not found run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.13.0-29-generic.postinst line 1025. No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: error processing package linux-image-3.13.0-29-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-29-generic: linux-image-extra-3.13.0-29-generic depends on linux-image-3.13.0-29-generic; however: Package linux-image-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-image-extra-3.13.0-29-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.13.0-29-generic; however: Package linux-image-3.13.0-29-generic is not configured yet. linux-image-generic depends on linux-image-extra-3.13.0-29-generic; however: Package linux-image-extra-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.13.0.29.35); however: Package linux-image-generic is not configured yet. dpkg: error processing package linux-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-27-generic: linux-image-extra-3.13.0-27-generic depends on linux-image-3.13.0-27-generic; however: Package linux-image-3.13.0-27-generic is not configured yet. dpkg: error processing package linux-image-extra-3.13.0-27-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-29-generic: linux-signed-image-3.13.0-29-generic depends on linux-image-3.13.0-29-generic (= 3.13.0-29.53); however: Package linux-image-3.13.0-29-generic is not configured yet. 
linux-signed-image-3.13.0-29-generic depends on linux-image-extra-3.13.0-29-generic (= 3.13.0-29.53); however: Package linux-image-extra-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-signed-image-3.13.0-29-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-generic: linux-signed-image-generic depends on linux-signed-image-3.13.0-29-generic; however: Package linux-signed-image-3.13.0-29-generic is not configured yet. dpkg: error processing package linux-signed-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-generic: linux-signed-generic depends on linux-signed-image-generic (= 3.13.0.29.35); however: Package linux-signed-image-generic is not configured yet. dpkg: error processing package linux-signed-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-27-generic: linux-signed-image-3.13.0-27-generic depends on linux-image-3.13.0-27-generic (= 3.13.0-27.50); however: Package linux-image-3.13.0-27-generic is not configured yet. linux-signed-image-3.13.0-27-generic depends on linux-image-extra-3.13.0-27-generic (= 3.13.0-27.50); however: Package linux-image-extra-3.13.0-27-generic is not configured yet. dpkg: error processing package linux-signed-image-3.13.0-27-generic (--configure): dependency problems - leaving unconfigured Setting up libxkbcommon-x11-0:amd64 (0.4.1-0ubuntu1) ... Setting up libqt5gui5:amd64 (5.2.1+dfsg-1ubuntu14.2) ... Processing triggers for libc-bin (2.19-0ubuntu6) ... Errors were encountered while processing: linux-image-3.13.0-27-generic linux-image-3.13.0-29-generic linux-image-extra-3.13.0-29-generic linux-image-generic linux-generic linux-image-extra-3.13.0-27-generic linux-signed-image-3.13.0-29-generic linux-signed-image-generic linux-signed-generic linux-signed-image-3.13.0-27-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
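
    The key line is "/usr/sbin/grub-mkconfig: 14: /etc/default/grub: nouveau.modeset=0: not found": the option sits outside the quotes of a GRUB_CMDLINE_LINUX* variable, so the shell tries to execute it, update-grub exits with code 127, and every kernel postinst fails after that. A sketch of the usual recovery, assuming that file is indeed the culprit:

        # 1. Repair the quoting in /etc/default/grub, e.g.:
        #      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"
        sudo nano /etc/default/grub

        # 2. Check that the grub config now regenerates cleanly
        sudo update-grub

        # 3. Let dpkg finish configuring the half-installed kernel packages
        sudo dpkg --configure -a
        sudo apt-get -f install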


  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, upload and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had it's own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. once again, it seemed to go great, rebooted, and came back up to the same issue, where it came to 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishwork engineer to look at the files and the code and determine what file was messed up and fixed it. The system was up and working just fine, it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would rollback to the 4.2 code.   Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part because something was wrong with the 4.2 code file here and the 4.0 was just fine.  So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.  So.... 
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned.  By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helpped me for the above issue, but it's important to follow the correct proceedure when doing an upgrade. 1) Upload software to both controllers and wait for it to unpack 2) On controller "A" navigate to configuration/cluster and click "takeover" 3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 4) Wait for controller "B" to apply the update and finish rebooting 5) Login to controller "B", navigate to configuration/cluster and click "takeover" 6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 7) Wait for controller "A" to apply the update and finish rebooting 8) Login to controller "B", navigate to configuration/cluster and click "failback"


  • Solution to Jira web service getWorklogs method error: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type

    - by DigiMortal
    When using Jira web service methods that operate on work logs you may get the following error when running your .NET application: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type. In this posting I will show you solution to this problem. I don’t want to go to deep in details about this problem. I think it’s enough for this posting to mention that this problem is related to one small conflict between .NET web service support and Axis. Of course, Jira team is trying to solve it but until this problem is solved you can use solution provided here. There is good solution to this problem given by Jira forum user Kostadin. You can find it from Jira forum thread RemoteWorkLog serialization from Soap Service in C#. Solution is simple – you have to use SOAP extension class to replace new class names with old ones that .NET found from WSDL. Here is the code by Kostadin. public class JiraSoapExtensions : SoapExtension {     private Stream _streamIn;     private Stream _streamOut;       public override void ProcessMessage(SoapMessage message)     {         string messageAsString;         StreamReader reader;         StreamWriter writer;           switch (message.Stage)         {             case SoapMessageStage.BeforeSerialize:                 break;             case SoapMessageStage.AfterDeserialize:                 break;             case SoapMessageStage.BeforeDeserialize:                 reader = new StreamReader(_streamOut);                 writer = new StreamWriter(_streamIn);                 messageAsString = reader.ReadToEnd();                 switch (message.MethodInfo.Name)                 {                     case "getWorklogs":                     case "addWorklogWithNewRemainingEstimate":                     case "addWorklogAndAutoAdjustRemainingEstimate":                     case "addWorklogAndRetainRemainingEstimate":                         messageAsString = messageAsString.                             .Replace("RemoteWorklogImpl", "RemoteWorklog")                             .Replace("service", "beans");                         break;                 }                 writer.Write(messageAsString);                 writer.Flush();                 _streamIn.Position = 0;                 break;             case SoapMessageStage.AfterSerialize:                 _streamIn.Position = 0;                 reader = new StreamReader(_streamIn);                 writer = new StreamWriter(_streamOut);                 messageAsString = reader.ReadToEnd();                 writer.Write(messageAsString);                 writer.Flush(); break;         }     }       public override Stream ChainStream(Stream stream)     {         _streamOut = stream;         _streamIn = new MemoryStream();         return _streamIn;     }       public override object GetInitializer(Type type)     {         return GetType();     }       public override object GetInitializer(LogicalMethodInfo info,         SoapExtensionAttribute attribute)     {         return null;     }       public override void Initialize(object initializer)     {     } } To get this extension work with Jira web service you have to add the following block to your application configuration file (under system.web section). <webServices>   <soapExtensionTypes>    <add type="JiraStudioExperiments.JiraSoapExtensions,JiraStudioExperiments"           priority="1"/>   </soapExtensionTypes> </webServices> Weird thing is that after successfully using this extension and disabling it everything still works.


  • iPhone Core Data Lightweight Migration error: reason = "Can't find model for source store";

    - by tul697
    Steps taken: 1. Added Data Model version: Changed my XXX.xcdatamodel to XXX.xcdatamodeId with Design - Data Model - Add Model Version. Set the new XXX 2.xcdatamodel as current version Added an attribute to XXX 2.xcdatamodel Added NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption like most tutorials, I added the option in the addPersistentStoreWithType. ran the code and I got this error: Unresolved error Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0x146bb80 "Operation could not be completed. (Cocoa error 134130.)", { URL = file://localhost/Users/tleung/Library/Application%20Support/iPhone%20Simulator/3.0/Applications/B585CDFC-17C3-4A44-84E2-0B75893C46B8/Documents/favorites.sqlite; metadata = { NSPersistenceFrameworkVersion = 241; NSStoreModelVersionHashes = { City = <70ea1f9f aaa9af29 52d2bfe4 3071d97f 8224f765 d69928d5 e5844120 52742a35; StationStore = <40d8093a 1d7d00ec 178b4374 36dfc137 ccfa3a88 87e2d467 69e8ae7e d4c49dbb; }; NSStoreModelVersionHashesVersion = 3; NSStoreModelVersionIdentifiers = ( ); NSStoreType = SQLite; NSStoreUUID = "9DD342A6-1F68-4997-A097-096DC96D7BF3"; }; reason = "Can't find model for source store"; } I've also tried NSString *path = [[NSBundle mainBundle] pathForResource:@"YOURDB" ofType:@"momd"]; NSURL *momURL = [NSURL fileURLWithPath:path]; managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL]; as suggested by other posts with no success. It seems that it can't find ANY of my models... anyone have any idea?
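
    "Can't find model for source store" means none of the model versions Core Data loaded matches the hashes recorded in favorites.sqlite, which typically happens when the original .xcdatamodel version was edited after the store was created, or when only a single .mom is loaded instead of the whole versioned bundle. For reference, a minimal sketch of the two pieces lightweight migration needs (the model name "XXX", and the coordinator/storeURL variables, are placeholders for your own):

        // Load the versioned model bundle (.momd) so every model version is available
        NSString *modelPath = [[NSBundle mainBundle] pathForResource:@"XXX" ofType:@"momd"];
        NSManagedObjectModel *model =
            [[NSManagedObjectModel alloc] initWithContentsOfURL:[NSURL fileURLWithPath:modelPath]];

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
            nil];

        NSError *error = nil;
        if (![coordinator addPersistentStoreWithType:NSSQLiteStoreType
                                       configuration:nil
                                                 URL:storeURL
                                             options:options
                                               error:&error]) {
            NSLog(@"Migration failed: %@, %@", error, [error userInfo]);
        }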


  • Wpf Resource: "Unknown Build Error, 'Path cannot be null..."

    - by Femaref
    The following is a snippet from a xaml defining a DataGrid in a Control, defining a template selector. <DataGrid.Resources> <selector:CurrencyColorSelector x:Key="currencyColorSelector"> <selector:CurrencyColorSelector.NegativeTemplate> <DataTemplate> <TextBlock Text="{Binding Balance, StringFormat=n}" Background="Red"/> </DataTemplate> </selector:CurrencyColorSelector.NegativeTemplate> <selector:CurrencyColorSelector.NormalTemplate> <DataTemplate> <TextBlock Text="{Binding Balance, StringFormat=n}"/> </DataTemplate> </selector:CurrencyColorSelector.NormalTemplate> </selector:CurrencyColorSelector> </DataGrid.Resources> Now, an error is thrown: "Unknown build error, 'Path cannot be null. Parameter name: path Line 27 Position 79.'" (Compiler or xaml validation error). I have no idea where this Path comes from, neither does my example show anything of it. If you doubleclick the error, it points to the end of the first line. Did anybody encounter such a problem and has a solution for it? The example was from here: http://www.wpftutorial.net/DataGrid.html (Row Details depending on the type of data)


  • Why is my Android app camera preview running out of memory on my AVD?

    - by Bryan
    I have yet to try this on an actual device, but expect similar results. Anyway, long story short, whenever I run my app on the emulator, it crashes due to an out of memory exception. My code really is essentially the same as the camera preview API demo from google, which runs perfectly fine. The only file in the app (that I created/use) is as below- package berbst.musicReader; import java.io.IOException; import android.app.Activity; import android.content.Context; import android.hardware.Camera; import android.os.Bundle; import android.view.SurfaceHolder; import android.view.SurfaceView; /********************************* * Music Reader v.0001 * Still VERY under construction. * @author Bryan * *********************************/ public class MusicReader extends Activity { private MainScreen main; @Override //Begin activity public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); main = new MainScreen(this); setContentView(main); } class MainScreen extends SurfaceView implements SurfaceHolder.Callback { SurfaceHolder sHolder; Camera cam; MainScreen(Context context) { super(context); //Set up SurfaceHolder sHolder = getHolder(); sHolder.addCallback(this); sHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceCreated(SurfaceHolder holder) { // Open the camera and start viewing cam = Camera.open(); try { cam.setPreviewDisplay(holder); } catch (IOException exception) { cam.release(); cam = null; } } public void surfaceDestroyed(SurfaceHolder holder) { // Kill all our crap with the surface cam.stopPreview(); cam.release(); cam = null; } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { // Modify parameters to match size. Camera.Parameters params = cam.getParameters(); params.setPreviewSize(w, h); cam.setParameters(params); cam.startPreview(); } } }
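
    One common culprit is surfaceChanged passing the raw surface width/height to setPreviewSize: the camera (emulated or real) only accepts the sizes it advertises, and an unsupported size can make the preview misbehave or exhaust memory. A hedged tweak (needs import java.util.List) that picks the closest advertised size instead:

        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            Camera.Parameters params = cam.getParameters();

            // Choose a preview size the camera actually supports rather than the raw w/h
            List<Camera.Size> sizes = params.getSupportedPreviewSizes();
            Camera.Size best = sizes.get(0);
            for (Camera.Size s : sizes) {
                if (Math.abs(s.width - w) < Math.abs(best.width - w)) {
                    best = s;
                }
            }

            params.setPreviewSize(best.width, best.height);
            cam.setParameters(params);
            cam.startPreview();
        }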


  • "Executing SQL directly; no cursor" error when using SCOPE_IDENTITY

    - by Chris
    There wasn't much on Google about this error, so I'm asking here. I'm switching a PHP web application from using MySQL to SQL Server 2008 (using ODBC, not php_mssql). Running queries or anything else isn't a problem, but when I try to do scope_identity (or any similar function), I get the error "Executing SQL directly; no cursor". I'm doing this immediately after an insert, so it should still be in scope. Running the same insert statement and then querying for the insert ID works fine in SQL Server Management Studio. Here's my code right now (everything else in the database wrapper class works fine for other queries, so I'll assume it isn't relevant right now): function insert_id(){ $x = $this->query_first("SELECT SCOPE_IDENTITY('session_log') as insert_id"); echo "($x)"; return $x; } query_first being a function that returns the first result from the first field of a query (basically the equivalent of execute_scalar() on .NET). The full error message: Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor., SQL state 01000 in SQLExecDirect in C:[...]\Database_MSSQL.php on line 110
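
    Two details stand out: SCOPE_IDENTITY() takes no argument (the table-name form belongs to IDENT_CURRENT('table')), and the scope only survives within the same batch and connection, so the safest pattern over ODBC is to ask for the identity in the same batch as the INSERT. A hedged sketch (table and column names invented; $this->connection stands for whatever handle the wrapper actually uses):

        // Run the INSERT and the identity read as one batch on one connection.
        // SET NOCOUNT ON suppresses the INSERT's rowcount result so the SELECT
        // comes back as the first result set.
        $sql = "SET NOCOUNT ON;"
             . " INSERT INTO session_log (user_name) VALUES ('example');"
             . " SELECT SCOPE_IDENTITY() AS insert_id;";
        $res = odbc_exec($this->connection, $sql);
        odbc_fetch_row($res);
        $insert_id = odbc_result($res, 'insert_id');
        // Depending on the driver you may still need odbc_next_result($res)
        // before the fetch to step past the INSERT's result.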


  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation gives rise to the following error sometimes: DBD::mysql::st execute failed: Deadlock found when trying to get lock; try restarting transaction at Db.pm line 276. The SQL statement that triggers the error is something like this: UPDATE file_table SET a_lock = 'process-1234' WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z' LIMIT 47 The error is triggered only sometimes. I'd estimate in 1% of calls or less. However, it never happened with a small table and has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try and work on the same row. The limit is designed to break their work into small chunks. I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows: my $dsn = "DBI:mysql:database=" . $DbConfig::database . ";host=${DbConfig::hostname};port=${DbConfig::port}"; my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password, { RaiseError => 1, AutoCommit => 1 }) or die $DBI::errstr; I have seen online that several other people have reported similar errors and that this may be a genuine deadlock situation. I have two questions: What exactly about my situation is causing the error above? Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"? Thanks in advance.
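
    Deadlocks between UPDATEs that lock overlapping index ranges are expected behaviour under InnoDB rather than a bug; the two standard mitigations are retrying the statement when MySQL reports error 1213 and adding a composite index on (param1, param2, param3) so each process touches fewer rows. A hedged retry sketch around the existing $dbh (RaiseError is already on, so the deadlock arrives as an exception):

        my $sql = "UPDATE file_table SET a_lock = ? "
                . "WHERE param1 = ? AND param2 = ? AND param3 = ? LIMIT 47";

        my $max_tries = 5;
        for my $try (1 .. $max_tries) {
            my $ok = eval {
                $dbh->do($sql, undef, "process-$$", 'X', 'Y', 'Z');
                1;
            };
            last if $ok;                                  # success
            die $@ unless $@ =~ /Deadlock found/i;        # unrelated failure: rethrow
            die $@ if $try == $max_tries;                 # give up after N attempts
            select(undef, undef, undef, 0.1 * $try);      # short backoff, then retry
        }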


  • Implicit theme error: The property 'Content' was not found in type 'System.Windows.Controls.Control'.

    - by Mark
    I have got an error while trying to upgrade our large project to SL4. I didn't write the original theme and my theme knowlege isn't great. In my demo app I have a Label and a LabelHeader(which i have created and is just a derived class from Label with DefaultStyleKey = typeof(LabelHeader); I am styling lthem like this: <Style TargetType="themeControls:LabelHeader"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <DataInput:Label FontSize="{TemplateBinding FontSize}" FontFamily="{TemplateBinding FontFamily}" Foreground="{TemplateBinding Foreground}" Content="{TemplateBinding Content}"/> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="FontFamily" Value="Tahoma"/> <Setter Property="FontSize" Value="20"/> <Setter Property="Foreground" Value="Red"/> </Style> This works in SL3 but in SL4 I get: Error: Unhandled Error in Silverlight Application Code: 2500 Category: ParserError Message: The property 'Content' was not found in type 'System.Windows.Controls.Control'. File: Line: 9 Position: 168 If I change this: Content="{TemplateBinding Content}" to Content="XXX" Then there is no error but , of course, I get XXX in my label rather than the content I set in XAML on the page Any ideas how I can get this working? Demo project here: http://walkersretreat.co.nz/files/ThemeIssue.zip (Apologies for reposting, I have so far got no answers over here: http://forums.silverlight.net/forums/p/183380/415930.aspx#415930)
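
    A likely cause is the ControlTemplate missing a TargetType: Silverlight 4 then resolves the TemplateBindings against plain Control, which has no Content property, which is exactly what the parser reports. A hedged version of the template with the target type spelled out (assuming LabelHeader ultimately derives from a ContentControl such as the SDK Label):

        <Style TargetType="themeControls:LabelHeader">
            <Setter Property="Template">
                <Setter.Value>
                    <!-- TargetType tells SL4 which properties TemplateBinding may bind to -->
                    <ControlTemplate TargetType="themeControls:LabelHeader">
                        <DataInput:Label FontSize="{TemplateBinding FontSize}"
                                         FontFamily="{TemplateBinding FontFamily}"
                                         Foreground="{TemplateBinding Foreground}"
                                         Content="{TemplateBinding Content}" />
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Setter Property="FontFamily" Value="Tahoma"/>
            <Setter Property="FontSize" Value="20"/>
            <Setter Property="Foreground" Value="Red"/>
        </Style>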

    Read the article

  • How to resolve user registration and activation email error in Django registration?

    - by user2476295
    So I was just trying to set up basic user authentication in Django and downloaded a django registration app with templates. Now when I run the server at 127.0.0.1:8000/accounts/register/ I get a basic registration page. I fill in the details, and when I click submit I get this error: "NoReverseMatch at /accounts/register/" Error during template rendering In template Users/sudhasinha/mysite/mysite/registration/templates/registration/activation_email.txt, error at line 4 'url' requires a non-empty first argument. The syntax changed in Django 1.5, see the docs. 1 {% load i18n %} 2 {% trans "Activate account at" %} {{ site.name }}: 3 4 http://{{ site.domain }}{% url registration_activate activation_key %} 5 6 {% blocktrans %}Link is valid for {{ expiration_days }} days.{% endblocktrans %} 7 This is what my activation_email.txt looks like: {% load i18n %} {% trans "Activate account at" %} {{ site.name }}: http://{{ site.domain }}{% url registration_activate activation_key %} {% blocktrans %}Link is valid for {{ expiration_days }} days.{% endblocktrans %} And this is what my registration_form.html looks like: {% extends "base.html" %} {% load i18n %} {% block content %} <form method="post" action="."> {{ form.as_p }} <input type="submit" value="{% trans 'Submit' %}" /> </form> {% endblock %} I have very minimal experience with Django and would appreciate some help resolving this error. My urls seem to be set up correctly, but I will post them if needed. Also pardon my horrible formatting.
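    The likely fix, going by the error message itself (sketch only; it assumes the URL pattern really is named registration_activate): quote the URL name inside the {% url %} tag, which Django 1.5 and later require.

    {# Sketch of activation_email.txt with the url name quoted for Django >= 1.5 #}
    {% load i18n %}
    {% trans "Activate account at" %} {{ site.name }}:

    http://{{ site.domain }}{% url 'registration_activate' activation_key %}

    {% blocktrans %}Link is valid for {{ expiration_days }} days.{% endblocktrans %}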

    Read the article

  • How do I repopulate the view model in ASP.NET MVC 2 after a validation error?

    - by Keltex
    I'm using ASP.NET MVC 2 and here's the issue. My view model looks something like this; it includes some fields that are edited by the user and others that are used only for display. Here's a simple version: public class MyModel { public decimal Price { get; set; } // for view purpose only [Required(ErrorMessage="Name Required")] public string Name { get; set; } } The controller looks something like this: public ActionResult Start(MyModel rec) { if (ModelState.IsValid) { Repository.SaveModel(rec); return RedirectToAction("NextPage"); } else { // validation error return View(rec); } } The issue is that when there's a validation error and I call View(rec), I'm not sure of the best way to repopulate the display-only values in my view model. The old way of doing it, where I passed in a form collection, looked something like this: public ActionResult Start(FormCollection collection) { var rec = Repository.LoadModel(); UpdateModel(rec); if (ModelState.IsValid) { Repository.SaveModel(rec); return RedirectToAction("NextPage"); } else { // validation error return View(rec); } } But doing this, I get an error on UpdateModel(rec): The model of type 'MyModel' could not be updated. Any ideas?
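    One approach worth sketching (an assumption, not a known-good answer for this app; it presumes Repository.LoadModel() returns the stored record with its display-only values): keep binding to MyModel, and on a validation failure copy the display-only fields back from storage before re-rendering the view.

    // Sketch: re-populate display-only fields after a failed validation pass.
    public ActionResult Start(MyModel rec)
    {
        if (ModelState.IsValid)
        {
            Repository.SaveModel(rec);
            return RedirectToAction("NextPage");
        }

        // Validation error: the form only posts the editable fields, so pull
        // the display-only values (e.g. Price) back from storage.
        var stored = Repository.LoadModel();
        rec.Price = stored.Price;
        return View(rec);
    }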

    Read the article

  • How can I track down "Template process failed: undef error" in Perl's Template Toolkit?

    - by swisstony
    I've moved a Perl CGI app from one web host to another. Everything's running fine except for Template Toolkit, which is giving the following error: "Template process failed: undef error - This shouldn't happen at /usr/lib/perl5/5.8.8/CGI/Carp.pm line 314." The templates are working fine on the other web host. I've set the DEBUG_ALL flag when creating the template object, but it doesn't provide any additional info about errors, just loads of debug output. I can't post the template source as there's lots of client-specific stuff in it. I've written a simple test template and that works okay. Just wondering if anyone has seen this error before or has any ideas on the quickest way to find a fix for it. EDIT: Here's a snippet of the code that loads and processes the template. my $vars = {}; $vars->{page_url} = $page_url; $vars->{info} = $info; $vars->{is_valid} = 0; $vars->{invalid_input} = 0; $vars->{is_warnings} = 0; $vars->{is_invalid_price} = 0; $vars->{output_from_proc} = $proc_output; ... my $file = 'clientTemplate.html'; #create ref to hash use Template::Constants qw( :debug ); my $template = Template->new( { DEBUG => DEBUG_SERVICE | DEBUG_CONTEXT | DEBUG_PROVIDER | DEBUG_PLUGINS | DEBUG_FILTERS | DEBUG_PARSER | DEBUG_DIRS, EVAL_PERL => 1, INCLUDE_PATH => [ '/home/perlstuff/templates', ], } ); $template->process( $file, $vars ) || die "Template process failed: ", $template->error(), "\n";
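    A debugging sketch (it assumes the $template, $file and $vars from the snippet above; nothing here is a confirmed cause): process into a scalar and report the Template::Exception's type and info directly, so the real message isn't swallowed by CGI::Carp's die handler.

    # Sketch: surface the underlying Template Toolkit error instead of dying
    # through CGI::Carp, which can mask it as "undef error".
    my $output = '';
    unless ($template->process($file, $vars, \$output)) {
        my $err = $template->error();            # a Template::Exception object
        warn 'TT error type: ', $err->type(), "\n";
        warn 'TT error info: ', $err->info(), "\n";
    }
    print $output;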

    Read the article

  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2K8 and I've set up a Unit Test Project. In it, I have a test class with a method that does a simple assertion. I'm using an Excel 2007 spreadsheet as my data source. My test method looks like this: [DataSource("System.Data.Odbc", "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5", "Sheet1", DataAccessMethod.Sequential)] [DeploymentItem("MyTestData.xlsx")] [TestMethod()] public void State_Value_Is_Set() { string expected = "MD"; string actual = TestContext.DataRow["State"] as string; Assert.AreEqual(expected, actual); } As indicated in the method decoration attributes, my Excel spreadsheet is on my local C:/ Drive. In it, the sheet where all of my data is located is named "Sheet1". I've copied the Excel spreadsheet into my project and I've set its Build Action = "Content" and I've set its Copy to Output Directory = "Copy if Newer". When trying to run this simple unit test, I receive the following error: The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine could not find the object 'Sheet1'. Make sure the object exists and that you spell its name and the path name correctly. I've verified that the sheet name is spelled correctly (i.e. Sheet1) and I've verified that my data sources are set correctly. Web searches haven't turned up much at all. And I'm totally stumped. All help or input is appreciated!!!!
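    One thing worth checking (a sketch, not verified against this workbook): the Excel ODBC driver exposes each worksheet as a table whose name carries a trailing dollar sign, so the DataSource attribute may need "Sheet1$" rather than "Sheet1".

    // Sketch: same test as above, but the table name is "Sheet1$" to match the
    // worksheet name the Excel ODBC driver actually exposes.
    [DataSource("System.Data.Odbc",
        "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
        "Sheet1$",
        DataAccessMethod.Sequential)]
    [DeploymentItem("MyTestData.xlsx")]
    [TestMethod()]
    public void State_Value_Is_Set()
    {
        string expected = "MD";
        string actual = TestContext.DataRow["State"] as string;
        Assert.AreEqual(expected, actual);
    }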

    Read the article

  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    I'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns: Number: 0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h) SQLState: HYT00 NativeError: 0 The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server), but of ADO's internal timeout mechanism. The Number (i.e. the COM HRESULT) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says: Execution stopped because a resource limit was reached. No results were returned. This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits CPU usage, I/O reads/writes, or network bandwidth and stops a query. The final useful piece is SQLState, which is a database-independent error code system. Unfortunately, the only reference for SQLState error codes I can find has no mention of HYT00. What to do? Note: I can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired" any more than I could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim". My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT00" means? And my real question still remains: how can I specifically trap a timeout-expired error raised by ADO? Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how, when googling for DB_E_ABORTLIMITREACHED, this question is #9, with MSDN nowhere to be found. Update 3: From the OLE DB ICommand::Execute reference: DB_E_ABORTLIMITREACHED Execution has been aborted because a resource limit has been reached. For example, a query timed out. No results have been returned. "For example", meaning not an exhaustive list.
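    A sketch of one way to test for it (VBScript-style ADO purely for illustration; the connection object and SQL variable names are made up): ODBC defines SQLSTATE HYT00 as "Timeout expired", so checking the SQLState of the entries in the connection's Errors collection is more specific than matching the ambiguous HRESULT.

    ' Sketch: treat the failure as a timeout only when an ADO Error object
    ' carries the ODBC timeout SQLSTATE.
    On Error Resume Next
    conn.Execute sqlText

    If Err.Number <> 0 Then
        Dim adoErr, isTimeout
        isTimeout = False
        For Each adoErr In conn.Errors
            If adoErr.SQLState = "HYT00" Then isTimeout = True
        Next
        If isTimeout Then
            ' handle "timeout expired" specifically here
        End If
    End If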

    Read the article

  • HTTP Error 405 from a Tooltip loading another page.

    - by RandomBen
    I have a web page that uses tooltips. I am using Prototip specifically. One of the options is to use Ajax to load another page within the tool. The Ajax functionality is coming from the Prototype framework, http://www.prototypejs.org/api/ajax/request/. All I really want to load is an image. I just don't want the image to load on page creation because there are so many images. So when I put a link to a .jpg file or even a .html file I receive the error HTTP Error 405.0 - Method Not Allowed from IIS. I am running IIS7. Is this an issue with my code or an issue with IIS7? Also, the other version of the error I am getting is The HTTP verb POST used to access path '/Images/Items/tech_over_RST.jpg' is not allowed. I receive this version of the error message when I run in debug mode from VS2010. I am also using URL Routing but not MVC.
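    A sketch of the most likely culprit (assumption: the tooltip's Ajax option ends up calling Prototype's Ajax.Request with its defaults): Prototype defaults to POST, and IIS answers POST against a static .jpg or .html with a 405, so forcing the method to GET in the request options is worth trying. The URL below is just the one from the error message.

    // Sketch: Prototype's Ajax.Request defaults to POST; static files on IIS
    // only accept GET/HEAD, hence the 405. Force GET instead.
    new Ajax.Request('/Images/Items/tech_over_RST.jpg', {
      method: 'get',
      onSuccess: function (response) {
        // hand response.responseText (or simply the image URL) to the tooltip here
      },
      onFailure: function (response) {
        // log response.status for anything other than the 405
      }
    });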

    Read the article

  • If the address of a function can not be resolved during deduction, is it SFINAE or a compiler error?

    - by Faisal Vali
    In C++0x, SFINAE rules have been simplified such that any invalid expression or type that occurs in the "immediate context" of deduction does not result in a compiler error but rather in deduction failure (SFINAE). My question is this: if I take the address of an overloaded function and it cannot be resolved, is that failure in the immediate context of deduction? (i.e. is it a hard error or SFINAE if it cannot be resolved?) Here is some sample code: struct X { // template T* foo(T,T); // let's not over-complicate things for now void foo(char); void foo(int); }; template struct S { template struct size_map { typedef int type; }; // here is where we take the address of a possibly overloaded function template void f(T, typename size_map::type* = 0); void f(...); }; int main() { S s; // should this cause a compiler error because 'auto T = &X::foo' is invalid? s.f(3); } GCC 4.5 says this is a compiler error, and Clang spits out an assertion violation. Here are some more related questions of interest: Does the FCD-C++0x clearly specify what should happen here? Are the compilers wrong in rejecting this code? Does the "immediate context" of deduction need to be defined a little better? Thanks!

    Read the article

  • Problem running a Python program, error: Name 's' is not defined.

    - by Sergio Tapia
    Here's my code: #This is a game to guess a random number. import random guessTaken = 0 print("Hello! What's your name kid") myName = input() number = random.randint(1,20) print("Well, " + myName + ", I'm thinking of a number between 1 and 20.") while guessTaken < 6: print("Take a guess.") guess = input() guess = int(guess) guessTaken = guessTaken + 1 if guess < number: print("You guessed a little bit too low.") if guess > number: print("You guessed a little too high.") if guess == number: break if guess == number: guessTaken = str(guessTaken) print("Well done " + myName + "! You guessed the number in " + guessTaken + " guesses!") if guess != number: number = str(number) print("No dice kid. I was thinking of this number: " + number) This is the error I get: Name error: Name 's' is not defined. I think the problem may be that I have Python 3 installed, but the program is being interpreted by Python 2.6. I'm using Linux Mint if that can help you guys help me. Using Geany as the IDE and pressing F5 to test it. It may be loading 2.6 by default, but I don't really know. :( Edit: Error 1 is: File "GuessingGame.py", line 8, in <Module> myName = input() Error 2 is: File <string>, line 1, in <Module>
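    A sketch of the usual fix (it assumes the diagnosis in the question is right, i.e. the script runs under Python 2.6, where input() eval()s whatever is typed and so raises NameError on a bare name like s): read lines with raw_input() when it exists, or run the script explicitly with python3.

    # Sketch: pick a line-reading function that works on both interpreters.
    try:
        read_line = raw_input   # Python 2: input() would try to eval() the typed text
    except NameError:
        read_line = input       # Python 3

    print("Hello! What's your name, kid?")
    my_name = read_line()

    print("Take a guess.")
    guess = int(read_line())
    print("You guessed", guess)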

    Read the article

  • Crystal Report with Error: A number range is required here.

    - by gofor.net
    Hi everyone, I am using Crystal Reports, and I am using code like the following to show SQL data in the report: string req = "{View_EODPumpTest.ROId} IN " + str + " AND " + "({View_EODPumpTest.RecordCreatedDate}>=date(" + fromDate.Year + " , " + fromDate.Month + " , " + fromDate.Day + ")" + "AND" + "{View_EODPumpTest.RecordCreatedDate}<=date(" + toDate.Year + " , " + toDate.Month + " ," + toDate.Day + " ))"; ReportDocument rep = new ReportDocument(); rep.Load(Server.MapPath("PumpTestReport.rpt")); DateTime fromDate = DateTime.Parse(Request.QueryString["fDate"].ToString()); DateTime toDate = DateTime.Parse(Request.QueryString["tDate"].ToString()); CrystalReportViewer_PumpTest.ReportSource = rep; //CrystalReportViewer1.SelectionFormula = str; rep.RecordSelectionFormula = str; CrystalDecisions.CrystalReports.Engine.TextObject from = ((CrystalDecisions.CrystalReports.Engine.TextObject)rep.ReportDefinition.ReportObjects["txtFrom"]); from.Text = fromDate.ToShortDateString(); CrystalDecisions.CrystalReports.Engine.TextObject to = ((CrystalDecisions.CrystalReports.Engine.TextObject)rep.ReportDefinition.ReportObjects["txtTO"]); to.Text = toDate.ToShortDateString(); //Session["Repo"] = rep; CrystalReportViewer_PumpTest.RefreshReport(); After running my application it executes with no exception, but I get this error: A number range is required here. Error in File C:\DOCUME~1\Delmon\LOCALS~1\Temp\PumpTestReport {14E557A7-51B3-4791-9C78-B6FBAFFBD87C}.rpt: Error in formula . '{View_EODPumpTest.ROId} IN ['15739410','13465410'] AND ({View_EODPumpTest.RecordCreatedDate}>=date(2010 , 12 , 1)AND{View_EODPumpTest.RecordCreatedDate}<=date(2010 , 12 ,25 ))' A number range is required here. What should I do about this? Please help. Thanks in advance.
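    A sketch of one likely cause (inferred from the error text alone; it assumes {View_EODPumpTest.ROId} is a numeric field in the report): the IN list contains quoted strings, which makes Crystal demand a number range. Building the list without quotes, and assigning the full combined formula rather than str, may clear the error. The ID values below are just the ones from the error message.

    // Sketch: numeric IN list plus the date condition, assigned as the full
    // record selection formula.
    string roIdList = "[15739410, 13465410]";   // numbers, not '15739410'
    string formula =
        "{View_EODPumpTest.ROId} IN " + roIdList +
        " AND {View_EODPumpTest.RecordCreatedDate} >= Date(" +
            fromDate.Year + ", " + fromDate.Month + ", " + fromDate.Day + ")" +
        " AND {View_EODPumpTest.RecordCreatedDate} <= Date(" +
            toDate.Year + ", " + toDate.Month + ", " + toDate.Day + ")";

    rep.RecordSelectionFormula = formula;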

    Read the article
