Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.


  • qemu-kvm virtual machine virtio network freeze under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows:

        /usr/libexec/qemu-kvm \
          -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \
          -boot order=c \
          -m 2G \
          -smp cores=1,threads=2 \
          -vga std \
          -name rb-dev2-www1-vm \
          -vnc :84,password \
          -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \
          -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \
          -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \
          -rtc base=utc \
          -device piix3-usb-uhci \
          -device usb-tablet

    /etc/qemu-ifup (used by the above command) is a very simple script, containing the following:

        #!/bin/sh
        sudo /sbin/ifconfig $1 0.0.0.0 promisc up
        sudo /usr/sbin/brctl addif br0 $1
        sleep 2

    And here's the info on br0 and other interfaces:

        avl-host3 14# brctl show
        bridge name  bridge id          STP enabled  interfaces
        br0          8000.180373f5521a  no           bond0
                                                     tap84
        virbr0       8000.525400858961  yes          virbr0-nic

        avl-host3 15# ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
        3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
        4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff
        5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff
        6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
            inet6 fe80::1a03:73ff:fef5:521a/64 scope link
               valid_lft forever preferred_lft forever
        7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
            inet 172.16.1.46/24 brd 172.16.1.255 scope global br0
            inet6 fe80::1a03:73ff:fef5:521a/64 scope link
               valid_lft forever preferred_lft forever
        8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
            inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
        9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
            link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
        12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
            link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link
               valid_lft forever preferred_lft forever

    bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know).

    The guest runs perfectly until I run a large 'rsync', when the network will freeze after some seemingly-random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via vnc, but it is unable to speak out its network interface. Any attempt to 'ping' from the guest gives a "Destination Host Unreachable" error for 3/4 packets and no reply for every fourth packet.
    Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load.

    Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine.

    If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back.

    I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet.

    There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address.

    I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically).

    Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing.

    In any case, I'm hoping someone has some idea what could cause this.

    Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable as a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?
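
    Two hedged workarounds worth writing down. The first is the asker's own stopgap from the addendum; the second is an assumption, not a confirmed fix, based on segmentation offloads sometimes misbehaving with virtio/bridged setups of this era:

        # Stopgap from the addendum: swap the guest NIC model from virtio to e1000
        -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=e1000 \

        # Speculative: inside the guest, disable offloads on the virtio NIC and
        # re-run the rsync to see whether the freezes stop
        ethtool -K eth0 tso off gso off gro off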

    Read the article

  • Entity Framework query not returning correctly enumerated results.

    - by SkippyFire
    I have this really strange problem where my Entity Framework query isn't enumerating correctly. The SQL Server table I'm using has a Sku field, and the column is "distinct". It isn't a key, but it doesn't contain any duplicate values. Using actual SQL with where, distinct and group by clauses I have confirmed this. However, when I do this:

        // Not good
        foreach (var product in dc.Products)

    or

        // Not good
        foreach (var product in dc.Products.ToList())

    or

        // Not good
        foreach (var product in dc.Products.OrderBy(p => p.Sku))

    the first two objects that are returned ARE THE SAME!!! The third item was technically the second item in the table, but then the fourth item was the first row from the table again!!! The only solution I have found is to use the Distinct extension method, which shouldn't really do anything in this situation:

        // Good
        foreach (var product in dc.Products.ToList().Distinct())

    Another weird thing about this is that the count of the resulting queries is the same!!! So whether or not the resulting enumerable has the correct results or duplicates, I always get the number of rows in the actual table! (No, I don't have a limit clause anywhere.) What could possibly cause this!?!?!?
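
    These symptoms (repeated objects, but a correct row count) usually point at the entity key rather than the query: if the key column the designer chose isn't actually unique per row, EF's identity map hands back the already-materialized object for every row that shares a key value. A hedged sketch of both checks, assuming dc.Products is EF v1's ObjectQuery<Product>:

        // 1) Bypass the identity map. If this yields distinct rows,
        //    the entity key in the model is the culprit.
        var query = dc.Products;
        query.MergeOption = MergeOption.NoTracking;   // System.Data.Objects
        foreach (var product in query)
            Console.WriteLine(product.Sku);

        // 2) The durable fix: in the EDMX designer, mark the truly unique
        //    column(s), e.g. Sku, as the Entity Key instead of a repeating one.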

    Read the article

  • Bit reversal of an integer, ignoring integer size and endianness

    - by ??O?????
    Given an integer typedef:

        typedef unsigned int TYPE;

    or

        typedef unsigned long TYPE;

    I have the following code to reverse the bits of an integer:

        TYPE max_bit = (TYPE)-1;

        void reverse_int_setup()
        {
            TYPE bits = (TYPE)max_bit;
            while (bits <<= 1)
                max_bit = bits;
        }

        TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;
            for (result = 0; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

    One just needs first to run reverse_int_setup(), which stores an integer with the highest bit turned on, then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant). Is there a platform-agnostic way to have, at compile time, the correct value that max_bit holds after the call to reverse_int_setup()? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.
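
    Both halves have well-trodden answers. The top bit falls out of the type's width at compile time (assuming, as usual for unsigned types, no padding bits), and the reversal can drop the setup call entirely; this is a sketch of the classic "obvious loop with early exit" from the bit-twiddling literature:

        #include <limits.h>

        #define MAX_BIT ((TYPE)1 << (sizeof(TYPE) * CHAR_BIT - 1))

        /* Reverse the bits of v; no setup call or global state needed. */
        TYPE reverse_int(TYPE v)
        {
            TYPE r = v;                         /* r will hold the reversed bits */
            int s = sizeof(v) * CHAR_BIT - 1;   /* extra shift needed at the end */

            for (v >>= 1; v; v >>= 1) {
                r <<= 1;
                r |= v & 1;
                s--;
            }
            return r << s;   /* shift in zeros for v's high zero bits */
        }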

    Read the article

  • How to call PopOver Controller from UITableViewCell.accessoryView?

    - by Vic
    Hi, First I would like to say that I'm really new to iPad/iPod/iPhone development, and to Objective-C too. With that being said, I'm trying to develop a small application targeting the iPad, using Xcode and IB. Basically, I have a table, and for each UITableViewCell in the table, I added to the accessoryView a button that contains an image. Here is the code:

        UIImage *img = [UIImage imageNamed:@"myimage.png"];
        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        CGRect frame = CGRectMake(0.0, 0.0, img.size.width, img.size.height);
        button.frame = frame; // match the button's size with the image size
        [button setBackgroundImage:img forState:UIControlStateNormal];
        // set the button's target to this table view controller so we can
        // interpret touch events and map that to a NSIndexSet
        [button addTarget:self action:@selector(checkButtonTapped:event:)
            forControlEvents:UIControlEventTouchUpInside];
        button.backgroundColor = [UIColor clearColor];
        cell.accessoryView = button;

    So far, so good. Now the problem is that I want a PopOver control to appear when a user taps the button on the accessoryView of a cell. I tried this in the "accessoryButtonTappedForRowWithIndexPath" of the tableView:

        UITableViewCell *cell = [myTable cellForRowAtIndexPath:indexPath];
        UIButton *button = (UIButton *)cell.accessoryView;
        // customViewController is the controller of the view that I want
        // to be displayed by the PopOver controller
        customViewController = [[CustomViewController alloc] init];
        popOverController = [[UIPopoverController alloc]
            initWithContentViewController:customViewController];
        popOverController.popoverContentSize = CGSizeMake(147, 122);
        CGRect rect = button.frame;
        [popOverController presentPopoverFromRect:rect
                                           inView:cell.accessoryView
                 permittedArrowDirections:UIPopoverArrowDirectionUp
                                 animated:YES];

    The problem with this code is that it shows the Popover at the top of the application View. While debugging I saw the values of "rect": x = 267, y = 13, so I think it is pretty obvious why the PopOver is being displayed so high up on the view. So my question is, how can I get the correct values for the PopOver to appear just below the button on the accessoryView of the cell? Also, as you can see, I'm telling it to use the "cell.accessoryView" for the "inView:" attribute, is that okay?
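
    The mismatch is a coordinate-space one: button.frame is expressed in the coordinates of the button's superview, but it is being passed alongside inView:cell.accessoryView, which is the button itself. Passing the button's own bounds with the button as the view keeps the rect and the view in the same space; a minimal sketch:

        // rect and inView: must describe the same coordinate space;
        // bounds is the button's rect in its own space.
        [popOverController presentPopoverFromRect:button.bounds
                                           inView:button
                 permittedArrowDirections:UIPopoverArrowDirectionUp
                                 animated:YES];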

    Read the article

  • OCUnit & NSBundle

    - by kpower
    I created OCUnit test in concordance with the "iPhone Development Guide". Here is the class I want to test:

        // myClass.h
        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>

        @interface myClass : NSObject {
            UIImage *image;
        }
        @property (readonly) UIImage *image;
        - (id)initWithIndex:(NSUInteger)aIndex;
        @end

        // myClass.m
        #import "myClass.h"

        @implementation myClass
        @synthesize image;

        - (id)init {
            return [self initWithIndex:0];
        }

        - (id)initWithIndex:(NSUInteger)aIndex {
            if ((self = [super init])) {
                NSString *name = [[NSString alloc] initWithFormat:@"image_%i", aIndex];
                NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
                image = [[UIImage alloc] initWithContentsOfFile:path];
                if (nil == image) {
                    @throw [NSException exceptionWithName:@"imageNotFound"
                        reason:[NSString stringWithFormat:@"Image (%@) with path \"%@\" for current index (%i) wasn't found.",
                            [name autorelease], path, aIndex]
                        userInfo:nil];
                }
                [path release];
            }
            return self;
        }

        - (void)dealloc {
            [image release];
            [super dealloc];
        }
        @end

    And my unit-test (LogicTests target):

        // myLogic.m
        #import <SenTestingKit/SenTestingKit.h>
        #import <UIKit/UIKit.h>
        #import "myClass.h"

        @interface myLogic : SenTestCase {
        }
        - (void)testTemp;
        @end

        @implementation myLogic
        - (void)testTemp {
            STAssertNoThrow([[myClass alloc] initWithIndex:0],
                "myClass initialization error");
        }
        @end

    All necessary frameworks, "myClass.m" and images were added to the target. But on build I have an error:

        [[myClass alloc] initWithIndex:0] raised Image (image_0) with path "(null)"
        for current index (0) wasn't found.. myClass initialization error

    This code (initialization) works fine in the application itself (main target) and later displays the correct image. I've also checked my project folder (build/Debug-iphonesimulator/LogicTests.octest/) - there are LogicTests, Info.plist and the necessary image files (image_0.png is one of them). What's wrong?
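
    The "(null)" path is the giveaway: in an OCUnit test run, [NSBundle mainBundle] is not the .octest bundle that contains the images, so pathForResource: returns nil. The usual remedy is to resolve resources against the bundle the class lives in; a sketch of the change inside initWithIndex::

        // Look in the bundle that contains this class: the .octest bundle when
        // running under OCUnit, the app bundle when running the app.
        NSBundle *bundle = [NSBundle bundleForClass:[self class]];
        NSString *path = [bundle pathForResource:name ofType:@"png"];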

    Read the article

  • Disable Autocommit in H2 with Hibernate/C3P0 ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 (as well as other databases). Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing):

        [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating
        session factory with properties: .....edited for brevity....
        hibernate.connection.autocommit=true, ....more stuff follows

    The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation, this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used).

    So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows:

        <bean id="myappTestLocalEmf"
              class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="myapp-core" />
            <property name="persistenceUnitPostProcessors">
                <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor">
                    <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" />
                </bean>
            </property>
            <property name="jpaVendorAdapter">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                    <property name="showSql" value="true" />
                    <property name="database" value="$DS{hibernate.database}" />
                    <property name="databasePlatform" value="$DS{hibernate.dialect}" />
                </bean>
            </property>
            <property name="jpaProperties">
                <props>
                    <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop>
                    <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop>
                    <prop key="hibernate.connection.autocommit">false</prop>
                    <prop key="hibernate.format_sql">true</prop>
                    <prop key="hibernate.use_sql_comments">true</prop>
                </props>
            </property>
        </bean>
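
    One hedged observation: when Hibernate obtains connections from an external DataSource (as with the jtaDataSource here), hibernate.connection.autocommit generally only applies to Hibernate's own connection provider, so the pool keeps winning. A workaround at the pool level is c3p0's connection-customizer hook, which re-disables autocommit every time a connection is handed out. The class below is a sketch against c3p0's documented ConnectionCustomizer API, not a drop-in for this exact setup:

        import java.sql.Connection;
        import com.mchange.v2.c3p0.AbstractConnectionCustomizer;

        public class NoAutoCommitCustomizer extends AbstractConnectionCustomizer {
            @Override
            public void onCheckOut(Connection c, String dataSourceIdentityToken)
                    throws Exception {
                // undo the pool's autocommit reset on every checkout
                c.setAutoCommit(false);
            }
        }

    Registered via the c3p0 property connectionCustomizerClassName set to this class's fully qualified name.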

    Read the article

  • error C2504: 'BASECLASS' : base class undefined

    - by numerical25
    I checked out a post similar to this, but the linkage was different and the issue was never resolved. The problem with mine is that for some reason the compiler is expecting there to be a definition for the base class, but the base class is just an interface. Below is the error in its entirety:

        c:\users\numerical25\desktop\intro todirectx\godfiles\gxrendermanager\gxrendermanager\gxrendermanager\gxdx.h(2) :
        error C2504: 'GXRenderer' : base class undefined

    Below is the code that shows how the headers link with one another.

    GXRenderManager.h

        #ifndef GXRM
        #define GXRM

        #include <windows.h>
        #include "GXRenderer.h"
        #include "GXDX.h"
        #include "GXGL.h"

        enum GXDEVICE {
            DIRECTX,
            OPENGL
        };

        class GXRenderManager {
        public:
            static int Ignite(GXDEVICE);
        private:
            static GXRenderer *renderDevice;
        };

        #endif

    At the top of GXRenderManager.h, there are the GXRenderer, windows, GXDX, and GXGL headers. I am assuming that by including them all in this document, they all link to one another as if they were all in the same document. Correct me if I am wrong, cause that's how I view headers. Moving on...

    GXRenderer.h

        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };

    GXGL.h

        class GXGL: public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXDX.h

        class GXDX: public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXGL.cpp and GXDX.cpp respectively

        #include "GXGL.h"

        void GXGL::Render() {
        }

        void GXGL::StartUp() {
        }

        //...Next document

        #include "GXDX.h"

        void GXDX::Render() {
        }

        void GXDX::StartUp() {
        }

    Not sure what's going on. I think it's how I am linking the documents, I am not sure.
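
    The assumption in the question is the crux: headers are textually pasted into whichever .cpp includes them, and each .cpp is compiled on its own. GXDX.cpp includes only GXDX.h, so the compiler never sees GXRenderer before hitting ": public GXRenderer". Making each header self-sufficient fixes that; a sketch:

        // GXDX.h -- include what the header itself needs, plus a guard
        #ifndef GXDX_H
        #define GXDX_H

        #include "GXRenderer.h"   // base class must be visible here

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

        #endif

    GXGL.h and GXRenderer.h want the same treatment: guards everywhere, and each header including what it uses.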

    Read the article

  • Setting up multiple channel types (AMF/AMFX) for Flex/BlazeDs

    - by Fergal
    We've configured our Flex client to have two channels for calling our services via BlazeDS. One channel is configured to use AMFChannel and the other HTTPChannel. Here's the services-config.xml:

        <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
            <endpoint url="http://{server.name}:{server.port}/{context.root}/data/messagebroker/amf"
                      class="flex.messaging.endpoints.AMFEndpoint" />
            <properties>
                <polling-enabled>false</polling-enabled>
            </properties>
        </channel-definition>

        <channel-definition id="my-amfx" class="mx.messaging.channels.HTTPChannel">
            <endpoint url="http://{server.name}:{server.port}/{context.root}/data/messagebroker/amfx"
                      class="flex.messaging.endpoints.HTTPEndpoint" />
            <properties>
                <polling-enabled>false</polling-enabled>
            </properties>
        </channel-definition>

    Our Flex client is written to use either AMF or AMFX depending on how we configure it. The problem is that although the client can switch between channels, it sends an AMF binary payload when attempting to call services via AMFX (expecting XML). The funny thing is that we can write services-config.xml to use either AMF or AMFX individually, but Flex doesn't seem to want to let us use both. Is this a bug in Flex? If not, how can we get it to use the correct protocol?
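
    If binary AMF is arriving at the AMFX endpoint, one possibility is that the client resolved its default channel set (baked in from services-config.xml at compile time) rather than the intended channel. A way to rule that out is to hand the channel to the service explicitly at runtime; a sketch, with host/port/context as placeholders:

        // ActionScript: pin the service to the HTTPChannel (AMFX) explicitly
        import mx.messaging.ChannelSet;
        import mx.messaging.channels.HTTPChannel;

        var cs:ChannelSet = new ChannelSet();
        cs.addChannel(new HTTPChannel("my-amfx",
            "http://host:port/context/data/messagebroker/amfx"));
        myRemoteObject.channelSet = cs;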

    Read the article

  • JQuery - Slide Example

    - by gav
    Hi All, I want to perform a simple slide motion on an HTML element. jQuery is already available on the site in question, so the next logical step for me was to look at their documentation: JQuery - Slide down. When I check out their demo, however, it doesn't seem to be functioning. In Firebug they have an error:

        missing ) after argument list
        wyciwyg://0/http://docs.jquery.com/UI/Effects/Slide
        Line 18

    Whilst the error seems simple, I can't work out how to correct it (on their site, by editing the JS). On my own site, using the same example, an error is found in the jQuery 1.4.2 script itself:

        jQuery.easing[specialEasing || defaultEasing] is not a function
        file:///home/gav/ee-workspaces/web/site/php/jquery-1.4.2.js
        Line 5854

    I don't mean to sound lazy/rude, but what's going on? Is the jQuery site and newest release actually broken? I doubt it, so what am I doing wrong? I'm a CS grad with no real web dev experience, so I'm not used to this method of debugging. Where should I start with this? Thanks, Gav
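
    For a plain slide, jQuery core already ships slideDown/slideUp/slideToggle with no jQuery UI dependency, which sidesteps the broken demo page entirely; the easing error above is the classic symptom of invoking a jQuery UI effect without the UI effects core loaded. A minimal sketch, with #trigger and #panel as placeholder selectors:

        // jQuery core only -- no jQuery UI required
        $('#trigger').click(function () {
            $('#panel').slideToggle('slow');
        });

        // If the jQuery UI "slide" effect is really wanted, include the UI
        // effects core first, then:
        // $('#panel').show('slide', { direction: 'up' }, 500);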

    Read the article

  • Virtual Earth (Bing) Pin "moves" when zoom level changes

    - by Ali
    Hi guys, Created a Virtual Earth (Bing) map to show a simple pin at a particular point. Everything works right now - the pin shows up, the title and description pop up on hover. The map is initially fully zoomed into the pin, but the STRANGE problem is that when I zoom out it moves slightly lower on the map. So if I started with the pin pointing somewhere in Toronto, if I zoom out enough the pin ends up in the middle of Lake Ontario! If I pan the map, the pin correctly stays in its proper location. When I zoom back in, it moves slightly up until it's back to its original correct position! I've looked around for a solution for a while, but I can't understand it at all. Please help!! Thanks a lot!

    Import with JavaScript: http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2

        $(window).ready(function () {
            GetMap();
        });

        map = new VEMap('birdEye');
        map.SetCredentials("hash key from Bing website");
        map.LoadMap(new VELatLong(43.640144, -79.392593), 1,
            VEMapStyle.BirdseyeHybrid, false, VEMapMode.Mode2D, true, null);

        var pin = new VEShape(VEShapeType.Pushpin,
            new VELatLong(43.640144, -79.392593));
        pin.SetTitle("Goes to Title of the Pushpin");
        pin.SetDescription("Goes as Description.");
        map.AddShape(pin);

    Read the article

  • Salesforce/PHP - outbound messages (SOAP) - memory limit issue

    - by Phill Pafford
    I'm using Salesforce to send outbound messages (via SOAP) to another server. The server can process about 8 messages at a time, but will not send back the ACK file if the SOAP request contains more than 8 messages. SF can send up to 100 outbound messages in 1 SOAP request, and I think this is causing a memory issue with PHP. If I process the outbound messages 1 by 1 they all go through fine; I can even do 8 at a time with no issues. But larger sets are not working.

    ERROR in SF:

        org.xml.sax.SAXParseException: Premature end of file

    Looking in the HTTP error logs, I see that the incoming SOAP message looks to be getting cut off, which throws a PHP warning stating:

        Premature end of data in tag ...
        PHP Fatal error: Call to a member function getAttribute() on a non-object

    This leads me to believe that PHP is having a memory issue and cannot parse the incoming message due to its size. I was thinking I could just set:

        ini_set('memory_limit', '64M');

    But would this be the correct approach? Is there a way I could set this to increase with the incoming SOAP request dynamically?

    UPDATE: Adding some code

        $data = fopen('php://input', 'rb');
        $headers = getallheaders();
        $content_length = $headers['Content-Length'];
        $buffer_length = 1000;
        $fread_length = $content_length + $buffer_length;
        $content = fread($data, $fread_length);

        /**
         * Parse values from soap string into DOM XML
         */
        $dom = new DOMDocument();
        $dom->loadXML($content);
        ....
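
    Before reaching for memory_limit, note that fread() on a stream like php://input can return fewer bytes than requested in a single call, which would truncate exactly the larger payloads and produce the "premature end" errors above. Reading in a loop (or with file_get_contents) rules that out; a sketch:

        // fread() may stop short on a stream; accumulate until EOF instead.
        $data = fopen('php://input', 'rb');
        $content = '';
        while (!feof($data)) {
            $content .= fread($data, 8192);
        }
        fclose($data);

        // equivalently:
        // $content = file_get_contents('php://input');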

    Read the article

  • Outlook Interop Send Message from Account

    - by Reiste
    Okay, the specs have changed on this one somewhat. Maybe someone can help me with this new problem. Manually, what the user is doing is opening a new message in Outlook (2007 now) which has the "From..." field exposed. They open this up, select a certain account from the Global Address List, and send the message on behalf of that account. Is this possible to do? I can get the AddressEntry from the Global Address List like so:

        AddressList list = null;
        foreach (AddressList addressList in _outlookApp.Session.AddressLists)
        {
            if (addressList.Name.ToLower().Equals("global address list"))
            {
                list = addressList;
                break;
            }
        }

        if (list != null)
        {
            AddressEntry entry = null;
            foreach (AddressEntry addressEntry in list.AddressEntries)
            {
                if (addressEntry.Name.ToLower().Equals("outgoing mail account"))
                {
                    entry = addressEntry;
                    break;
                }
            }
        }

    But I'm not sure I can make an Account type from the AddressEntry. It seems to happen manually, when they select the address to send from. How do I mirror this in the Interop? Thanks!

    (My Original Question): I developed a small C# program to send email using the Outlook 2007 interop. The client required that the mail not be sent using the default account - they had a secondary account they needed used. No problem - I used the Microsoft.Office.Interop.Outlook.Account class to access the available accounts and choose the correct one. Now, it turns out they need this to work in Outlook 2003. Of course, the Account class doesn't exist in the Outlook interop 11.0. How can I achieve the same thing with Outlook 2003? Thanks in advance.
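
    For the Outlook 2003 object model, the usual stand-in for the missing Account class is MailItem.SentOnBehalfOfName, which mirrors picking an entry in the exposed "From..." field (Exchange still has to grant the user send-on-behalf rights for that mailbox). A sketch, with the display name taken from the question and the recipient as a placeholder:

        // Outlook 2003 interop: send on behalf of a GAL entry by display name
        Outlook.MailItem mail =
            (Outlook.MailItem)_outlookApp.CreateItem(Outlook.OlItemType.olMailItem);
        mail.SentOnBehalfOfName = "outgoing mail account"; // resolved against the GAL
        mail.To = "recipient@example.com";
        mail.Subject = "Test";
        mail.Send();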

    Read the article

  • Creating generic list of instances of a class.

    - by Jim Branson
    I have several projects where I build a dictionary from a small class. I'm using C# 2008, Visual Studio 2008 and .NET 3.5. This is the code:

        namespace ReportsTest
        {
            class Junk
            {
                public static Dictionary<string, string> getPlatKeys()
                {
                    Dictionary<string, string> retPlatKeys = new Dictionary<string, string>();
                    SqlConnection conn = new SqlConnection("Data Source=JB55LTARL;Initial Catalog=HldiReports;Integrated Security=True");
                    SqlDataReader Dr = null;
                    conn.Open();
                    SqlCommand cmnd = new SqlCommand("SELECT Make, Series, RedesignYear, SeriesName FROM CompPlatformKeys", conn);
                    Dr = cmnd.ExecuteReader();
                    while (Dr.Read())
                    {
                        utypPlatKeys rec = new utypPlatKeys(Dr);
                        retPlatKeys.Add(rec.Make + rec.Series + rec.RedesignYear, rec.SeriesName);
                    }
                    conn = null;
                    Dr = null;
                    return retPlatKeys;
                }
            }

            public class utypPlatKeys
            {
                public string Make { get; set; }
                public string Series { get; set; }
                public string RedesignYear { get; set; }
                public string SeriesName { get; set; }

                public utypPlatKeys(SqlDataReader dr)
                {
                    this.Make = dr.GetInt16(dr.GetOrdinal("Make")).ToString("D3");
                    this.Series = dr.GetInt16(dr.GetOrdinal("Series")).ToString("D3");
                    this.RedesignYear = dr.GetInt16(dr.GetOrdinal("RedesignYear")).ToString();
                    this.SeriesName = dr["SeriesName"].ToString();
                }
            }
        }

    The immediate window shows all of the entries in retPlatKeys, and if you hover over retPlatKeys after loading, it indicates the number of elements like this: retPlatKeys | Count = 923, which is correct. I went to create a new project using this pattern, only now the immediate window says retPlatKeys is out of scope, and hovering over retPlatKeys after loading I get something like retPlatKeys | 0x0000000002578900. Any help is greatly appreciated.

    Read the article

  • Encrypting with Perl CBC and decrypting with PHP mcrypt

    - by Ed
    I have an encrypted string that was encrypted with Perl Crypt::CBC (Rijndael, cbc). The original plaintext was encrypted with the encrypt_hex() method of Crypt::CBC.

        $encrypted_string = '52616e646f6d49567b2c89810ceddbe8d182c23ba5f6562a418e318b803a370ea25a6a8cbfe82bc6362f790821dce8441a790a7d25d3d9ea29f86e6685d0796d';

    I have the 32 character key that was used. mcrypt is successfully compiled into PHP, but I'm having a very hard time trying to decrypt the string in PHP. I keep getting gibberish back. If I unpack('H*', $encrypted_string), I see 'RandomIV' followed by what looks like binary. I can't seem to correctly extract the IV and separate the actual encrypted message. I know I'm not providing much information, but I'm not sure where else to start.

        $cipher = 'rijndael-256';
        $cipher_mode = 'cbc';
        $td = mcrypt_module_open($cipher, '', $cipher_mode, '');
        $key = '32 characters'; // Does this need to be converted to something else before being passed?
        $iv = ??     // Not sure how to extract this from $encrypted_string.
        $token = ??  // Should be a sub-string of $encrypted_string, correct?
        mcrypt_generic_init($td, $key, $iv);
        $clear = rtrim(mdecrypt_generic($td, $token), '');
        mcrypt_generic_deinit($td);
        mcrypt_module_close($td);
        echo $clear;

    Any help, or pointers in the right direction, would be greatly appreciated. Let me know if I need to provide more information.
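
    A hedged sketch based on Crypt::CBC's legacy header layout: the decoded bytes start with the literal tag "RandomIV" (8 bytes), then an 8-byte IV, then the ciphertext; older Crypt::CBC versions null-padded that 8-byte IV out to the cipher's 16-byte block size. Note also that Crypt::Rijndael uses a 128-bit block, so the mcrypt counterpart is rijndael-128 (mcrypt's "rijndael-256" names a 256-bit block size, not key size). Whether the key is used literally depends on how the Perl side was configured (literal_key vs. Crypt::CBC's MD5-based derivation), so treat this strictly as a starting point:

        $raw   = pack('H*', $encrypted_string);  // hex -> binary
        $iv8   = substr($raw, 8, 8);             // 8-byte IV after "RandomIV"
        $token = substr($raw, 16);               // remainder is the ciphertext
        $iv    = str_pad($iv8, 16, "\0");        // legacy null-padding to block size

        $td = mcrypt_module_open(MCRYPT_RIJNDAEL_128, '', MCRYPT_MODE_CBC, '');
        mcrypt_generic_init($td, $key, $iv);
        $clear = mdecrypt_generic($td, $token);
        mcrypt_generic_deinit($td);
        mcrypt_module_close($td);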

    Read the article

  • Amazon EC2 RSA key stopped authenticating - Permission denied (publickey)

    - by shedd
    Authenticating to our Ubuntu EC2 instance worked fine until a little while ago. All of a sudden, the key is being rejected. When we create a new instance with the keypair, we're able to connect to the instance perfectly, so it appears to be an issue with the existing instance. Port 22 is open. Any suggestions on what to look at from a configuration standpoint so we can fix this? Any thoughts on how we can get into the box?

    Here is the SSH debug output. Is there anything obviously amiss? Thanks so much!

        $ ssh -v -i ~/zzz.pem ubuntu@###.###.###.###
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to ###.###.###.### [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file zzz.pem type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '###.###.###.###' is known and matches the RSA host key.
        debug1: Found key in /zzz/.ssh/known_hosts:18
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /zzz/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Offering public key: zzz.txt
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: zzz.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).
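
    The trace shows the key exchange succeeding and the server simply refusing every offered key, which usually points at something server-side: ~ubuntu/.ssh/authorized_keys changed, or its permissions drifting past what sshd accepts. One common recovery path (an assumption, not from the question, and only applicable if the instance is EBS-backed): stop the instance, attach its root volume to a working instance, and inspect from there:

        # on the rescue instance, with the broken root volume mounted at /mnt/rescue
        ls -ld /mnt/rescue/home/ubuntu /mnt/rescue/home/ubuntu/.ssh
        cat /mnt/rescue/home/ubuntu/.ssh/authorized_keys   # is the right public key still there?

        # sshd insists on tight permissions:
        chmod 755 /mnt/rescue/home/ubuntu
        chmod 700 /mnt/rescue/home/ubuntu/.ssh
        chmod 600 /mnt/rescue/home/ubuntu/.ssh/authorized_keys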

    Read the article

  • Nunit Relative Path failing

    - by levi.siebens
    I'm having an issue with NUnit where I cannot find an image file when I run my tests: each time it looks for images, it looks in the NUnit folder instead of the folder where the binary resides. Below is a detailed description of what's happening.

    I'm building a binary under test which contains the definition for some game elements and png files which define the sprites I'm using (for sanity's sake, call it Binary1). NUnit runs tests from a separate binary (Binary1Test), executing test methods against the first binary (Binary1). All tests pass, unless the test executes code in Binary1 which then requires Binary1 to use one of the image files (which are defined via a relative path). When the method is called, NUnit throws a file-not-found exception stating that it cannot find the file, and states it's looking inside of the Program Files\Nunit.net 2.0 folder.

    So I have no idea why the code is doing this, and to make matters more confusing, when I pull up Environment.CurrentDirectory it gives me the correct path (the path to my debug folder) and not the path to NUnit. Also, if I use this instead of using the relative path, my tests will run without issue.

    So my question is, does anyone know why, in the case of loading relative paths from within my binary, NUnit decides to use its directory instead of the directory where the binary is located and where the images are stored? Thanks.
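
    Relative paths resolve against the process's current working directory, which belongs to whichever host launched the code (the NUnit GUI or console here), and the picture can drift further under NUnit's assembly shadow copying. The robust pattern is to anchor paths to the assembly itself; a sketch using CodeBase, which survives shadow copying (the file name is a placeholder):

        using System;
        using System.IO;
        using System.Reflection;

        // Directory of the assembly as originally deployed (not the shadow copy)
        Uri codeBase = new Uri(Assembly.GetExecutingAssembly().CodeBase);
        string baseDir = Path.GetDirectoryName(codeBase.LocalPath);
        string imagePath = Path.Combine(baseDir, "sprites.png");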

    Read the article

  • Telerik RadAlert back button cache problem

    - by Michael VS
    Hi, I'm sure you have had this one before, so if you could point me to something similar...

    I have a server-side creation of a RadAlert window using the usual Sys.Application.remove_load and add_load procedure; however, the alert keeps popping up, as it seems to be caching when the user hits the back button after it has been activated. I have tried to put an onclick event on a button to clear the function using remove_load before it moves to the next page; however, it still doesn't seem to clear it.

    It's used in validation, so if a user's input fails validation it pops up. If they then go and enter correct input, it moves onto the next page. If they then use the back button, this is where it pops up again. Any ideas?

    Server side:

        private void Page_Load(object sender, System.EventArgs e)
        {
            if (!IsPostBack)
            {
                btnSearch.Attributes.Add("onclick", "Sys.Application.remove_load(f);");
            }
        }

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            string radalertscript = "(function(){var f = function(){radalert('Welcome to RadWindow Prometheus!', 330, 210); Sys.Application.remove_load(f);};Sys.Application.add_load(f);})()";
            RadAjaxManager1.ResponseScripts.Add(radalertscript);
        }

    I've also tried using RadAjaxManager1.ResponseScripts.Clear(); before it moves on to the next page on the postback event.
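
    Since the back button replays the cached copy of the page, injected radalert script included, one blunt angle is to stop the browser caching that page at all, so navigating back forces a fresh request. A sketch of the standard ASP.NET response-cache settings (an assumption that page caching really is the trigger here):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Ask browsers/proxies not to cache this page, so Back re-requests it
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetNoStore();
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        }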

    Read the article

  • jQuery UI - addClass removeClass - CSS values are stuck

    - by Jason D
    Hi, I'm trying to do a simple animation. You show the div. It animates correctly. You hide the div. Correct. You show the div again. It shows, but there is no animation. It is stuck at the value of when you first interrupted it. So somehow the interpolation CSS that is happening during [add|remove]Class is getting stuck there. The second time around, the [add|remove]Class is actually running, but the CSS it's setting from the class is getting ignored (I think being overshadowed). How can I fix this WITHOUT resorting to .animate and hard-coded style values? The whole point was to put the animation end point in a CSS class. Thanks!

        <!doctype html>
        <style type="text/css">
            div { width: 400px; height: 200px; }
            .green { background-color: green; }
        </style>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
        <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(function() {
            $('#show').bind({
                click: function() { showAndRun() }
            })
            $('#hide').bind({
                click: function() { $('div').stop(true, false).fadeOut('slow') }
            })

            function showAndRun() {
                function pulse() {
                    $('div').removeClass('green', 2000, function() {
                        $(this).addClass('green', 2000, pulse)
                    })
                }
                $('div').stop(true, false).hide().addClass('green').fadeIn('slow', pulse)
            }
        })
        </script>
        <input id="show" type="button" value="show" /><input id="hide" type="button" value="hide" />
        <div style="display: none;"></div>
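
    The diagnosis in the question looks right: jQuery UI's class animations interpolate via inline styles, and .stop(true, false) abandons the tween mid-flight, leaving an inline background-color that outranks the class from then on. Two hedged tweaks: jump animations to their end instead of freezing them, and wipe leftover inline styles before restarting:

        function showAndRun() {
            function pulse() {
                $('div').removeClass('green', 2000, function () {
                    $(this).addClass('green', 2000, pulse)
                })
            }
            $('div')
                .stop(true, true)       // finish, don't freeze, the class tween
                .removeAttr('style')    // drop stuck inline interpolation values
                .hide()
                .addClass('green')
                .fadeIn('slow', pulse)
        }

    The hide handler likely wants the same .stop(true, true) treatment.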

    Read the article

  • Can't get Secondary UITableViewController to display inside a UITabBarController

    - by Paul Johnston
    I've programmatically created a UITabBarController that is loaded in my App Delegate like this:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            tabBarController = [[UITabBarController alloc] init];

            myTableViewController = [[MyTableViewController alloc] init];
            UINavigationController *tableNavController = [[[UINavigationController alloc]
                initWithRootViewController:myTableViewController] autorelease];
            myTableViewController.title = @"Tab 1";
            [myTableViewController release];

            mySecondTableViewController = [[MySecondTableViewController alloc] init];
            UINavigationController *table2NavController = [[[UINavigationController alloc]
                initWithRootViewController:mySecondTableViewController] autorelease];
            mySecondTableViewController.title = @"Tab 2";
            [mySecondTableViewController release];

            tabBarController.viewControllers = [NSArray arrayWithObjects:tableNavController, table2NavController, nil];
            [window addSubview:tabBarController.view];
            [window makeKeyAndVisible];
        }

    Now the issue I have is that I can get into the views no problem, but when I try and click onto any item in the Table View, I can't get a secondary table view to appear in any tab. The tabs work absolutely fine, just not the secondary views. I'm using the code below in myTableViewController to run when selecting a specific row (the code reaches the HELP line, and crashes)...

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSUInteger row = [indexPath row];

            // this gets the correct view controller from a list of controllers
            SecondaryViewController *svc = [self.controllers objectAtIndex:row];

            /*** HELP NEEDED WITH THIS LINE ***/
            [self.navigationController pushViewController:svc animated:YES];
        }

    Simply put, I'm trying to switch views to the new view controller whilst keeping the tabs available, and using the navigation to go back and forth (like in the iTunes App). Any help appreciated. Thanks

    Read the article

  • onPause/onResume activity issues

    - by Josh
    I have a small test application I am working on which has a timer that updates a textview to count down from 100 to 0. That works fine, but now I am trying to pause the application if the user presses the back button on the phone, and then restart the timer from where they left off when they reopen the app. Here is the code I am using:

        @Override
        public void onPause() {
            if (this._timer_time_remaining > 0) {
                this.timer.cancel();
            }
            super.onPause();
            Log.v("Pausing", String.format("Pausing with %d", this._timer_time_remaining));
        }

        @Override
        public void onResume() {
            super.onResume();
            Log.v("Resuming", String.format("Resuming with %d", this._timer_time_remaining));
            if (this._timer_time_remaining > 0) {
                setContentView(R.layout.in_game);
                start_timer(this._timer_time_remaining);
            }
        }

    The start_timer() method creates a CountDownTimer which updates the textview in the onTick method and updates the _timer_time_remaining int variable. CountDownTimer and _timer_time_remaining are both declared at the class level like this:

        private CountDownTimer timer;
        private int _timer_time_remaining;

    From the Log.v() prints I see that the _timer_time_remaining variable has the correct number of seconds stored when onPause is called, but it is set back to 0 when onResume starts. Why does the variable get reset? I thought that the application would continue to run in the background with the same values. Am I missing something? This is all declared in a class that extends Activity. Thanks in advance!
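
    Pressing Back doesn't pause the activity, it finishes it: onPause is followed by onDestroy, and reopening the app constructs a brand-new Activity whose int fields start at 0. State that should outlive the activity has to be written somewhere persistent on the way out and read back on the way in; a sketch using the activity's private SharedPreferences:

        @Override
        public void onPause() {
            if (this._timer_time_remaining > 0) {
                this.timer.cancel();
            }
            // persist across finish/relaunch, not just pause/resume
            getPreferences(MODE_PRIVATE).edit()
                .putInt("time_remaining", this._timer_time_remaining)
                .commit();
            super.onPause();
        }

        @Override
        public void onResume() {
            super.onResume();
            this._timer_time_remaining =
                getPreferences(MODE_PRIVATE).getInt("time_remaining", 0);
            if (this._timer_time_remaining > 0) {
                setContentView(R.layout.in_game);
                start_timer(this._timer_time_remaining);
            }
        }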

    Read the article

  • Android layout with square buttons

    - by Mannaz
    I want to make a layout similar to this one: four square buttons on the screen, each of them using half of the screen width/screen height (whichever is smaller). I already tried to achieve this by using a LinearLayout, but the buttons end up using the correct width while still having the height of the background (not square any more).

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">

            <LinearLayout
                android:layout_width="fill_parent"
                android:layout_height="wrap_content">

                <Button
                    android:layout_height="wrap_content"
                    style="@style/CKMainButton"
                    android:layout_width="fill_parent"
                    android:text="@string/sights"
                    android:id="@+id/ApplicationMainSight"
                    android:layout_toLeftOf="@+id/ApplicationMainEvent" />

                <Button
                    android:layout_height="wrap_content"
                    style="@style/CKMainButton"
                    android:layout_width="fill_parent"
                    android:text="@string/sights"
                    android:id="@+id/ApplicationMainSight"
                    android:layout_toLeftOf="@+id/ApplicationMainEvent" />
            </LinearLayout>

            <LinearLayout
                android:layout_width="fill_parent"
                android:layout_height="wrap_content">

                <Button
                    android:layout_height="wrap_content"
                    style="@style/CKMainButton"
                    android:layout_weight="1"
                    android:layout_width="fill_parent"
                    android:text="@string/usergenerated"
                    android:id="@+id/ApplicationMainUserGenerated" />

                <Button
                    android:layout_height="wrap_content"
                    style="@style/CKMainButton"
                    android:layout_weight="1"
                    android:layout_width="fill_parent"
                    android:text="@string/tours"
                    android:id="@+id/ApplicationMainTour" />
            </LinearLayout>
        </LinearLayout>

    It's looking like this: [result screenshot]. How can I get the layout to look like the image above?
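
    Two things seem to be going on in the XML: the first row's buttons are missing the android:layout_weight="1" that the second row has (so they don't split the width evenly), and no stock widget constrains height to match width. A common approach for the latter (an assumption, not from the question) is a small Button subclass that squares itself in onMeasure:

        // SquareButton.java -- sizes itself to the smaller measured dimension
        import android.content.Context;
        import android.util.AttributeSet;
        import android.widget.Button;

        public class SquareButton extends Button {

            public SquareButton(Context context, AttributeSet attrs) {
                super(context, attrs);
            }

            @Override
            protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
                super.onMeasure(widthMeasureSpec, heightMeasureSpec);
                int size = Math.min(getMeasuredWidth(), getMeasuredHeight());
                setMeasuredDimension(size, size);
            }
        }

    Referenced from XML as <com.example.SquareButton ... /> with fill_parent for both dimensions and layout_weight="1" on every button in both rows.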

    Read the article

  • Core Data Migration - "Can't add source store" error

    - by Tofrizer
    Hi, In my iPhone app I'm using Core Data and I've made changes to my data model that cannot be automatically migrated over (i.e. added new relationships). I added the data model version (Design - Data Model - Add Model Version) and applied my new data model changes to the new version 2. I then created a mapping object model and set the Source and Destination models to their correct data models (old and new respectively). When I run the app and call the persistentStoreCoordinator, my app barfs with the following:

        2010-02-27 02:40:30.922 XXXX[73578:20b] Unresolved error
        Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0xfc2240
        "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=134130
            UserInfo=0xfbb3a0 "Operation could not be completed. (Cocoa error 134130.)";
            reason = "Can't add source store";
        }

    FWIW (not much I think) I've also made the usual code changes in persistentStoreCoordinator to use the NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption (for future data model changes that can be automatically migrated). More relevantly, my managedObjectModel is created by calling initWithContentsOfURL where the file/resource type is "momd".

    I've tried updating both the source and destination model in the mapping model (Design - Mapping Model - Update XXX Model), as well as deleting the mapping model and recreating it. I've cleaned and re-built, but all to no avail. I still get the above error message.

    Any pointers/thoughts on how I can further debug or resolve this problem, please? I haven't posted any code snippets because this feels much more like a build environment issue (and my code is very standard - just the usual core data code to handle migrations using a mapping model, but I'm happy to show the code if it helps). Appreciate any help. Thanks

    Read the article

  • Checking if a RoutedEvent has any handlers

    - by AK
    I've got a custom Button class that always performs the same action when it gets clicked (opening a specific window). I'm adding a Click event that can be assigned in the button's XAML, like a regular button. When it gets clicked, I want to execute the Click event handler if one has been assigned; otherwise I want to execute the default action. The problem is that there's apparently no way to check if any handlers have been added to an event. I thought a null check on the event would do it:

        if (Click == null)
        {
            DefaultClickAction();
        }
        else
        {
            RaiseEvent(new RoutedEventArgs(ClickEvent, this));
        }

    ...but that doesn't compile. The compiler tells me that I can't do anything other than += or -= to an event outside of the defining class, even though I'm trying to do this check INSIDE the defining class. I've implemented the correct behavior myself, but it's ugly and verbose, and I can't believe there isn't a built-in way to do this. I must be missing something. Here's the relevant code:

        public class MyButtonClass : Control
        {
            //...
            public static readonly RoutedEvent ClickEvent = EventManager.RegisterRoutedEvent(
                "Click", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(MyButtonClass));

            public event RoutedEventHandler Click
            {
                add { ClickHandlerCount++; AddHandler(ClickEvent, value); }
                remove { ClickHandlerCount--; RemoveHandler(ClickEvent, value); }
            }

            private int ClickHandlerCount = 0;

            private Boolean ClickHandlerExists
            {
                get { return ClickHandlerCount > 0; }
            }
            //...
        }
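
    The compile error comes from the custom add/remove accessors: once those are supplied, there is no compiler-generated backing delegate to null-check, inside the defining class or out. Keeping a private delegate field alongside the routed event removes the counter bookkeeping and behaves sensibly if the same handler is added twice or removed while absent; a sketch:

        private RoutedEventHandler _clickHandlers;   // mirrors the routed event's handlers

        public event RoutedEventHandler Click
        {
            add    { _clickHandlers += value; AddHandler(ClickEvent, value); }
            remove { _clickHandlers -= value; RemoveHandler(ClickEvent, value); }
        }

        private bool ClickHandlerExists
        {
            get { return _clickHandlers != null; }
        }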

    Read the article

  • Correctly assigning value to a Core Data attribute with an integer data-type

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute for a Core Data object. To do this, I am using this method:

        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            object.integerValue = row;
        }

    This is giving me a warning:

        warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast

    What am I mixing up here?

    --Edit 1--

    Ok, so I can get rid of the warning by changing the method to do the following:

        NSNumber *number = [NSNumber numberWithInteger:row];
        object.integerValue = number;

    However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000. Which doesn't seem right to me, since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber?

    --Edit 2--

    Ok, so I'm realizing that there is some fundamental idea that I don't understand. I now understand that the 60-million-plus number can be cast back to the correct 0-5 number by using integerValue. So it seems my question is: how can I save an integer between 0-5 to the attribute if the NSNumber that is returned is over 60 million? Do I need to be using a different data type?
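
    The 62-million figure isn't the stored value; it's what an NSNumber pointer looks like when printed with an integer format specifier. Core Data exposes scalar attributes as NSNumber objects, so the value has to be boxed on the way in and unboxed (or printed with %@) on the way out; a sketch:

        - (void)pickerView:(UIPickerView *)pickerView
              didSelectRow:(NSInteger)row
               inComponent:(NSInteger)component {
            object.integerValue = [NSNumber numberWithInteger:row];  // box the scalar
        }

        // Reading it back:
        NSLog(@"value = %d", [object.integerValue intValue]);  // unbox for %d
        NSLog(@"value = %@", object.integerValue);             // or let %@ describe it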

    Read the article

  • Remove mailmerge data source via OpenXML

    - by Dan
    I have some code that uses OpenXML to open up a docx file, find all mailmerge fields, and replace them with data (ignoring the datasource that may have been provided). I initially tested this against a document created in Office 2007 and it seemed to work great. We then created one in 2003 based off an Excel spreadsheet data source and saved it to 2007 docx format. When we open the file produced by my code, Word warns the user that it is going to execute some SQL, specifically "SELECT * from 'Sheet1$'". It has options of Yes/No. Selecting Yes requires I find the data source. Selecting No brings me to the document, which appears to be correct. I'm not sure why I'm now seeing this pop up. Perhaps it's due to a different data source for the 2003 document? My hope was that there was a way to delete all references to any datasources and that the pop-up wouldn't show. I found this, but it doesn't seem to work. Any suggestions?
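
    For reference, the data-source binding lives in the w:mailMerge element of the document's settings part, so stripping that element (and the ODSO/query information nested inside it) is one way to stop Word asking about the SQL. A sketch against the OpenXML SDK's Wordprocessing classes, hedged since SDK versions differ; path is a placeholder:

        using System.Linq;
        using DocumentFormat.OpenXml.Packaging;
        using DocumentFormat.OpenXml.Wordprocessing;

        using (WordprocessingDocument doc = WordprocessingDocument.Open(path, true))
        {
            DocumentSettingsPart settingsPart = doc.MainDocumentPart.DocumentSettingsPart;
            if (settingsPart != null)
            {
                MailMerge mailMerge = settingsPart.Settings.Elements<MailMerge>().FirstOrDefault();
                if (mailMerge != null)
                {
                    mailMerge.Remove();          // drop the data-source reference entirely
                    settingsPart.Settings.Save();
                }
            }
        }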

    Read the article
