Search Results

Search found 1017 results on 41 pages for 'persistent'.

Page 12/41 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Session and Pop Up Window

    - by imran_ku07
    Introduction: Session is secure state management. It allows the user to store information on one page and access it on another, and it is powerful enough to store any type of object. Every user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new pop-up window, this cookie is not posted to the server with the request, so the server is unable to identify the session data for the current user. In this article I will show you how to handle this situation.

    Description: While working on an application, I was getting an exception saying that Session was null when a pop-up window opened. Looking at the problem more closely, I found that the ASP.NET_SessionId cookie of the parent page was not posted in the cookie header of the child (pop-up) window. Therefore, to make the session available in both the parent and the child (pop-up) window, you have to present the same cookie. To share the cookie, I passed the parent SessionID in the query string:

        window.open('http://abc.com/s.aspx?SASID=<%= Session.SessionID %>', 'V');

    Then, in the Application_PostMapRequestHandler application event, check whether the current request has no ASP.NET_SessionId cookie and the SASID query string is not null; if so, add the cookie to the Request before the Session is acquired, so that the session data is the same for both the parent and the pop-up window:

        Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)
            If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then
                Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))
            End If
        End Sub

    Now you can access Session in your parent and child windows without any problem.

    How this works: ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the requesting user. Cookies may be persistent (saved permanently in the user's cookie store) or non-persistent (saved temporarily in browser memory). The ASP.NET_SessionId cookie is saved as non-persistent, which means that if the user closes the browser, the cookie is immediately removed - a sensible step that helps security. That is why ASP.NET is unable to identify that the request is coming from the same user, and every browser instance gets its own ASP.NET_SessionId. To resolve this, you need to present the parent's ASP.NET_SessionId cookie to the server when opening the pop-up window. You can confirm this behaviour with tools like Firebug or Fiddler.

    Summary: Hopefully you will find this useful, having seen how to work around the problem of sharing Session between different browser instances by sharing their session identifier cookie.
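    For C# projects, a minimal equivalent of the same Global.asax handler might look like the following sketch (assuming the same SASID query-string convention as above):

        protected void Application_PostMapRequestHandler(object sender, EventArgs e)
        {
            // If the session cookie is missing but the parent's SessionID was
            // passed in the query string, re-attach it so session state resolves
            // to the same session as the parent window.
            if (Request.Cookies["ASP.NET_SessionId"] == null &&
                Request.QueryString["SASID"] != null)
            {
                Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.QueryString["SASID"]));
            }
        }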

    Read the article

  • VirtualBox Clone Root HD / Ubuntu / Network issue

    - by john.graves(at)oracle.com
    When you clone a root Ubuntu disk in VirtualBox, one thing that gets messed up is the network card definition. This is because Ubuntu (as it should) uses udev IDs for the network device. When you boot your new disk, the network device ID has changed, so udev creates a new eth1 device. Unfortunately, this conflicts with the VirtualBox network setup. What to do?

    1. Boot the box (no network).
    2. Edit /etc/udev/rules.d/70-persistent-net.rules.
    3. Delete the eth0 line and modify the eth1 line to be eth0.

    --------- Example OLD -----------
        # This file was automatically generated by the /lib/udev/write_net_rules
        # program, run by the persistent-net-generator.rules rules file.
        #
        # You can modify it, as long as you keep each rule on a single
        # line, and change only the value of the NAME= key.

        # PCI device 0x8086:0x100e (e1000)   <-- Delete these two lines
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:d8:8d:15", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

        # PCI device 0x8086:0x100e (e1000)   <-- Modify the next line and change eth1 to eth0
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:89:84:98", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
    ----------------------------------------

    --------- Example NEW -----------
        # This file was automatically generated by the /lib/udev/write_net_rules
        # program, run by the persistent-net-generator.rules rules file.
        #
        # You can modify it, as long as you keep each rule on a single
        # line, and change only the value of the NAME= key.

        # PCI device 0x8086:0x100e (e1000)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:89:84:98", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
    ----------------------------------------
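    An alternative that avoids hand-editing (a sketch, assuming the stock persistent-net generator that Ubuntu of this era ships): delete the generated rules file and let udev rebuild it, with the new MAC mapped to eth0, on the next boot.

        # Remove the stale generated rules; udev writes a fresh file at boot.
        sudo rm /etc/udev/rules.d/70-persistent-net.rules
        sudo reboot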

    Read the article

  • iphone bookmarklet cookie persistence

    - by Larry Davis
    I have an iPhone (jQTouch-based) web app that uses cookies for authentication. The usage flow is as follows: the user goes to the mobile landing page and is instructed to save the page as a bookmarklet on their home screen. They launch the bookmarklet to go to a login page, log in, and get a cookie. The cookie works and they can navigate throughout the web site. However, this session cookie is not persistent. If they leave Safari and then restart using the saved bookmarklet, the cookies set during their previous session are gone. Just using Safari (i.e. launching Safari directly rather than through the bookmarklet) to navigate the pages works fine (i.e. start Safari, go to the URL, log in, restart Safari, go back to the URL). I find that the cookies that were active when the bookmarklet was created are persistent, but any cookies set during a session where Safari is accessed through the bookmarklet are not. I'm wondering if this is a Safari/iPhone issue and/or if there is any way around this. Many thanks for any insight you can provide.
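    One workaround worth sketching (my illustration, not from the post - saveToken/loadToken and the /restore_session endpoint are hypothetical, and it assumes the server can re-establish a session from a token rather than a cookie) is to mirror the credential into localStorage, which home-screen web apps do keep between launches:

        // Hypothetical sketch: persist the session token outside the cookie jar.
        function saveToken(token) {
            try { window.localStorage.setItem('authToken', token); } catch (e) { /* storage unavailable */ }
        }

        function loadToken() {
            try { return window.localStorage.getItem('authToken'); } catch (e) { return null; }
        }

        // On launch, if the cookie is gone but a token survives in localStorage,
        // hand it to the server to re-establish the session.
        var token = loadToken();
        if (token && document.cookie.indexOf('sessionid') === -1) {
            window.location = '/restore_session?token=' + encodeURIComponent(token);
        }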

    Read the article

  • AppEngine JDO-QL : Problem with multiple AND and OR in query

    - by KlasE
    Hi, I'm working with App Engine (Java/JDO) and am trying to do some querying with lists. I have the following class:

        @PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
        public class MyEntity {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            Key key;

            @Persistent
            List<String> tags = new ArrayList<String>();

            @Persistent
            String otherVar;
        }

    The following JDO-QL works for me:

        ( tags.contains('a') || tags.contains('b') || tags.contains('c') || tags.contains('d') )
        && otherVar > 'a' && otherVar < 'z'

    This seems to return all results where tags holds one or more strings with one or more of the given values, together with the inequality search on otherVar. But the following does not work:

        (tags.contains('a') || tags.contains('_a'))
        && (tags.contains('b') || tags.contains('_b'))
        && otherVar > 'a' && otherVar < 'z'

    In this case I want all hits where there is at least one a (either a or _a) and one b (either b or _b), together with the inequality search as before. But the problem is that I also get back results where there is an a but no b, which is not what I want. Maybe I have missed something obvious, made a coding error, or possibly there is a restriction on how you can write these queries in App Engine, so any tips or help would be very welcome. Regards Klas
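    A common workaround on the App Engine datastore (a sketch, not from the original post - it assumes the result set is small enough to post-filter in memory) is to query on only one of the OR groups and enforce the other condition in Java:

        import java.util.ArrayList;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class TagQueryHelper {
            // Sketch: query on the first OR group, then apply the second in memory.
            public static List<MyEntity> findWithBothTags(PersistenceManager pm) {
                Query q = pm.newQuery(MyEntity.class,
                    "(tags.contains('a') || tags.contains('_a')) && otherVar > 'a' && otherVar < 'z'");
                @SuppressWarnings("unchecked")
                List<MyEntity> candidates = (List<MyEntity>) q.execute();

                List<MyEntity> results = new ArrayList<MyEntity>();
                for (MyEntity e : candidates) {
                    // Keep only entities that also contain b or _b.
                    if (e.tags.contains("b") || e.tags.contains("_b")) {
                        results.add(e);
                    }
                }
                return results;
            }
        }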

    Read the article

  • Could not determine the size of this expression.

    - by Søren
    Hi, I have recently started to use MATLAB Simulink, and my problem is that I can't implement an AMDF function, because the Simulink compiler cannot determine the lengths. Simulink errors:

        Could not determine the size of this expression.
        Function 'Embedded MATLAB Function2' (#38.728.741), line 33, column 32: "1:flength-k+1"
        Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2' (#38)
        Embedded MATLAB Interface Error: Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2' (#38)

    MY CODE:

        persistent sLength
        persistent fLength
        persistent amdf

        % Length of the frame
        flength = length(frame);

        % Pitch period is between 2.5 ms and 19.5 ms for the LPC-10 algorithm.
        % This is because the algorithm assumes the frequency span is 50 to 400 Hz.
        pH = ceil((1/min(fspan))*fs);
        if (pH > flength)
            pH = flength;
        end;
        pL = ceil((1/max(fspan))*fs);
        if (pL <= 0 || pL >= flength)
            pL = 0;
        end;
        sLength = pH - pL;

        % Normalize the frame
        frame = frame/max(max(abs(frame)));

        % Allocating memory for the calculation of the amdf
        %amdf = zeros(1,sLength);
        amdf = 0;

        % Calculating the AMDF with unbiased normalizing
        for k = (pL+1):pH
            amdf(k-pL) = sum(abs(frame(1:flength-k+1) - frame(k:flength)))/(flength-k+1);
        end;

        % Output of the AMDF
        if (min(amdf) < lvlThr)
            voiced = 1;
        else
            voiced = 0;
        end;

        % Output of the minimum of the amdf
        minAMDF = min(amdf);

    HELP Kind regards Søren
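    Embedded MATLAB blocks need sizes that are fixed, or at least bounded, at compile time, so growing amdf inside the loop is the usual trigger for this error. A minimal sketch of the common fix - MAXLAG is an assumed upper bound for illustration, not a value from the post - preallocates the buffer:

        function [voiced, minAMDF] = amdf_fixed(frame, pL, pH, lvlThr) %#eml
        % Sketch: bound all sizes so the Simulink compiler can infer them.
        MAXLAG = 256;                      % assumed upper bound on pH - pL
        flength = length(frame);
        amdf = inf(1, MAXLAG);             % fixed-size buffer; unused slots stay inf
        for k = (pL+1):pH
            amdf(k-pL) = sum(abs(frame(1:flength-k+1) - frame(k:flength)))/(flength-k+1);
        end
        minAMDF = min(amdf);               % inf padding cannot win the min
        voiced = double(minAMDF < lvlThr);

    If the indexing expression itself still fails to compile, the frame length may also need a fixed upper bound declared for the block input.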

    Read the article

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant, and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset. Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over 5 times longer than automatic! It's a big inconsistency, and the lightweight option of NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: provides absolutely no indication of progress while processing. Can anybody provide a supported way to get the migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as automatic?
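    For reference, observing progress during a manual migration is straightforward (a sketch, assuming the inferred mapping model has already been generated; the store URLs and model variables are illustrative):

        // Observe migrationProgress via KVO while running the migration manually.
        NSMigrationManager *manager =
            [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                           destinationModel:destinationModel];
        [manager addObserver:self
                  forKeyPath:@"migrationProgress"
                     options:NSKeyValueObservingOptionNew
                     context:NULL];

        NSError *error = nil;
        [manager migrateStoreFromURL:sourceURL
                                type:NSSQLiteStoreType
                             options:nil
                    withMappingModel:inferredModel
                    toDestinationURL:destinationURL
                     destinationType:NSSQLiteStoreType
                  destinationOptions:nil
                               error:&error];

        // In the KVO callback, update the UI:
        // - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
        //                         change:(NSDictionary *)change context:(void *)context {
        //     float progress = [[change objectForKey:NSKeyValueChangeNewKey] floatValue];
        //     // ... update a UIProgressView ...
        // }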

    Read the article

  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL). Each database connection might not be used each time the script runs, but all could be used in a single script execution. My question is: is it better to connect to all the databases once at the beginning of the script, even though all the connections might not be used? Or is it better to connect to them as needed? The only catch is that I would need to have the connection call in a loop (so the database connection would be made anew X times in the loop).

    Example Code #1:

        // Connections at the beginning of the script
        $dbh_oracle = connect2db();
        $dbh_mysql  = connect2db();
        $dbh_mssql  = connect2db();

        for ($i=1; $i<=5; $i++) {
            // NOTE: might not use all the connections
            $rs = queryDb($query,$dbh_*); // $dbh can be any of the 3 connections
        }

    Example Code #2:

        // Connections in the loop
        for ($i=1; $i<=5; $i++) {
            // NOTE: would use all the connections, but connecting multiple times
            $dbh_oracle = connect2db();
            $dbh_mysql  = connect2db();
            $dbh_mssql  = connect2db();

            $rs_oracle = queryDb($query,$dbh_oracle);
            $rs_mysql  = queryDb($query,$dbh_mysql);
            $rs_mssql  = queryDb($query,$dbh_mssql);
        }

    Now, I know you could use a persistent connection, but would that be one connection open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and the adodb persistent connection method for Oracle. I know that persistent connections can also be resource hogs, and I'm looking for the best performance/practice.
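    A third option worth sketching (connect2db is the poster's own helper; the cache and the driver-name argument below are my assumptions, not code from the post) is a lazy getter: connect on first use, then reuse the handle, so each database is opened at most once no matter how many loop iterations touch it:

        <?php
        // Sketch: lazily open each connection once, on first use.
        $connections = array();

        function getDb($name) {
            global $connections;
            if (!isset($connections[$name])) {
                // connect2db() is assumed to accept a driver name here.
                $connections[$name] = connect2db($name);
            }
            return $connections[$name];
        }

        for ($i = 1; $i <= 5; $i++) {
            // Only the databases actually queried get opened.
            $rs_mysql = queryDb($query, getDb('mysql'));
        }
        ?>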

    Read the article

  • Configuring IHS server to direct traffic to the Netty component bound to a port

    - by rbot
    I have a server component (based on JBoss Netty, which can maintain and handle persistent connections) deployed in WAS. This component, when deployed and initiated within the WAS environment, binds to a port and listens for incoming HTTP connections. [Why I had to deploy a Netty HTTP server within WAS is another story - a management requirement! Netty is deployed in WAS as a Spring bean which, when initiated, runs on a port on the machine, independent of WAS.] Clients (a mobile app) were able to establish persistent HTTP connections (to the above URL::Port) with this Netty component and send/receive requests. Now I have to replicate this feature in our production environment, where an IHS server (web server) sits in front of the WAS. What I expected is to get an IHS URL which could redirect the incoming packets to the specific port on WAS, so that the client apps can establish a similar persistent HTTP connection. Our server admin tried a few combinations and we are not able to identify how to proceed further on this. Your expert ideas would be highly appreciated.

    Read the article

  • Robocopy to a drive connected to a wlan router fails

    - by Ville Koskinen
    I have a wireless router and a USB hard drive connected to it. Basic file access on the command line and in Explorer works flawlessly after having set up some options on the router and mapping some folders with

        net use k: \\ROUTER\Folder1 /user:MYLAPTOP\Me password /persistent:yes
        net use n: \\ROUTER\Folder2 /user:MYLAPTOP\Me password /persistent:yes

    Robocopy (and, for that matter, SyncToy) to a network drive however fails:

        robocopy c:\Files k:\Backup /MIR /Z

    gives

        There is not enough space on the disk.
        2010/01/05 09:52:11 ERROR 112 (0x00000070) Accessing Destination Directory N:\
        Waiting 30 seconds...

    The error message is misleading: there is plenty of space on the disk and the folders I'm copying are small. The router is an ASUS WL-500gp with standard firmware. I'd appreciate it if someone could explain what is causing the problem and, if possible, how to fix it.

    Read the article

  • Failover tmpfs mirroring. Am I doing it right?

    - by user45286
    My goal is to have a certain directory available as tmpfs. There will be some modifications to this dir during server uptime, and those modifications must be synced to a non-tmpfs persistent dir on the HDD via rsync. After server boot, the latest version from the non-tmpfs persistent dir must be moved to tmpfs and the rsync syncing started. I'm afraid that rsync will erase the non-tmpfs backup if the tmpfs dir is empty. I'm doing it this way right now (see the sketch below):

    1. create the tmpfs partition in /etc/fstab
    2. in /etc/rc.local (pseudocode):
       - delete the "tmpfs rsync" cronjob from /var/spool/cron/crontabs if there is any
       - cp -r /path/to/non-tmpfs-backup /path/to/tmpfs/dir
       - append /var/spool/cron/crontabs with the "tmpfs rsync" cronjob

    What do you think?
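    A minimal sketch of that rc.local logic (the paths are the poster's placeholders made concrete, and the "tmpfs-sync" marker comment is my assumption; the ordering - remove job, restore, re-add job - is what guards the backup against an empty tmpfs):

        #!/bin/sh
        # /etc/rc.local sketch - restore tmpfs contents before enabling the sync job.

        SRC=/path/to/non-tmpfs-backup
        DST=/path/to/tmpfs/dir

        # Remove any stale sync job first so it cannot run against an empty tmpfs.
        crontab -l | grep -v 'tmpfs-sync' | crontab -

        # Restore the last persisted state into the (empty) tmpfs.
        cp -a "$SRC/." "$DST/"

        # Re-enable the periodic sync only after the restore succeeded.
        ( crontab -l; echo '* * * * * rsync -a --delete /path/to/tmpfs/dir/ /path/to/non-tmpfs-backup/ # tmpfs-sync' ) | crontab -

        exit 0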

    Read the article

  • Memcached - doesn't seem to be working

    - by Trev
    my local.xml:

        <session_save><![CDATA[files]]></session_save>
        <cache>
          <backend>memcached</backend>
          <prefix>MAGE_</prefix>
          <memcached>
            <servers>
              <server>
                <host><![CDATA[127.0.0.1]]></host>
                <port><![CDATA[11211]]></port>
                <persistent><![CDATA[1]]></persistent>
              </server>
            </servers>
          </memcached>
        </cache>

    /var/cache is still filling up. memcached is running:

        memcache  2685  0.0  0.3 351888 26152 ?  Sl  08:07  0:19 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1

    How do I know it's working? I notice no speed increases.
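    One quick way to verify memcached is actually being used (a sketch; nc and the stats counters are standard memcached tooling, nothing Magento-specific is assumed):

        # Ask memcached for its counters; repeat while clicking around the site.
        echo stats | nc 127.0.0.1 11211 | egrep 'cmd_get|cmd_set|get_hits|get_misses'

        # If cmd_get/cmd_set stay at 0 while the site is in use,
        # the application is not talking to memcached at all.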

    Read the article

  • Connection drops while transferring large files to one server on a network

    - by Charlotte
    My company has two sites, each with its own LAN, using a site-to-site VPN tunnel to connect the two. When transferring files (especially larger files) from site1 to site2 server1, the file transfer fails. I don't think this can be a VPN issue, because transferring the same files to site2 server2, which is on the same network as server1, works fine. Pings to server1 and server2 at site2 from site1 are about the same, mostly 19/20 ms with the odd one up to 50 ms. As server1 is a DB server with a high load, I thought the NIC might be overloaded, but a transfer from site2 server1 to site2 server2 works fine, and that uses the same NIC on server1 as transfers from site1 to site2 server1. The servers are both Windows Server 2003 VMs with VMXNET 3 NICs.

    Site2 Server1 route print:

        IPv4 Route Table
        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x10003 ...00 50 56 99 28 9b ...... vmxnet3 Ethernet Adapter #2
        0x10004 ...00 50 56 99 18 97 ...... vmxnet3 Ethernet Adapter
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway        Interface      Metric
        0.0.0.0                    0.0.0.0          172.20.10.1    172.20.10.18   10
        10.10.10.0                 255.255.255.0    10.10.10.70    10.10.10.70    10
        10.10.10.70                255.255.255.255  127.0.0.1      127.0.0.1      10
        10.255.255.255             255.255.255.255  10.10.10.70    10.10.10.70    10
        127.0.0.0                  255.0.0.0        127.0.0.1      127.0.0.1      1
        172.20.10.0                255.255.255.0    172.20.10.18   172.20.10.18   10
        172.20.10.18               255.255.255.255  127.0.0.1      127.0.0.1      10
        172.20.255.255             255.255.255.255  172.20.10.18   172.20.10.18   10
        224.0.0.0                  240.0.0.0        10.10.10.70    10.10.10.70    10
        224.0.0.0                  240.0.0.0        172.20.10.18   172.20.10.18   10
        255.255.255.255            255.255.255.255  10.10.10.70    10.10.10.70    1
        255.255.255.255            255.255.255.255  172.20.10.18   172.20.10.18   1
        Default Gateway: 172.20.10.1
        ===========================================================================
        Persistent Routes:
        None

    Site2 Server2 route print:

        IPv4 Route Table
        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x10003 ...00 50 56 99 15 00 ...... vmxnet3 Ethernet Adapter
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway         Interface       Metric
        0.0.0.0                    0.0.0.0          172.20.10.1     172.20.10.114   10
        127.0.0.0                  255.0.0.0        127.0.0.1       127.0.0.1       1
        172.20.10.0                255.255.255.0    172.20.10.114   172.20.10.114   10
        172.20.10.114              255.255.255.255  127.0.0.1       127.0.0.1       10
        172.20.255.255             255.255.255.255  172.20.10.114   172.20.10.114   10
        224.0.0.0                  240.0.0.0        172.20.10.114   172.20.10.114   10
        255.255.255.255            255.255.255.255  172.20.10.114   172.20.10.114   1
        Default Gateway: 172.20.10.1
        ===========================================================================
        Persistent Routes:
        None

    Site1 Server route print:

        ===========================================================================
        Interface List
        14...00 50 56 93 00 0b ......vmxnet3 Ethernet Adapter #2
         1...........................Software Loopback Interface 1
        12...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
        13...00 00 00 00 00 00 00 e0  Teredo Tunneling Pseudo-Interface
        ===========================================================================
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway        Interface        Metric
        0.0.0.0                    0.0.0.0          192.168.168.1  192.168.168.118  261
        127.0.0.0                  255.0.0.0        On-link        127.0.0.1        306
        127.0.0.1                  255.255.255.255  On-link        127.0.0.1        306
        127.255.255.255            255.255.255.255  On-link        127.0.0.1        306
        192.168.168.0              255.255.255.0    On-link        192.168.168.118  261
        192.168.168.118            255.255.255.255  On-link        192.168.168.118  261
        192.168.168.255            255.255.255.255  On-link        192.168.168.118  261
        224.0.0.0                  240.0.0.0        On-link        127.0.0.1        306
        224.0.0.0                  240.0.0.0        On-link        192.168.168.118  261
        255.255.255.255            255.255.255.255  On-link        127.0.0.1        306
        255.255.255.255            255.255.255.255  On-link        192.168.168.118  261
        ===========================================================================
        Persistent Routes:
        Network Address            Netmask          Gateway Address  Metric
        0.0.0.0                    0.0.0.0          192.168.168.1    Default
        ===========================================================================

        IPv6 Route Table
        ===========================================================================
        Active Routes:
        If  Metric  Network Destination                 Gateway
         1     306  ::1/128                             On-link
        14     261  fe80::/64                           On-link
        14     261  fe80::3c6b:996f:ef36:ee76/128       On-link
         1     306  ff00::/8                            On-link
        14     261  ff00::/8                            On-link
        ===========================================================================
        Persistent Routes:
        None

    tracert from site1 to site2 server1:

        Tracing route to server1 [172.20.10.18] over a maximum of 30 hops:
          1    19 ms    19 ms    19 ms  server1 [172.20.10.18]
        Trace complete.

    tracert from site2 server1 to site1: when this was run, it went to the external IP of site2, then to a couple of external IPs of the ISP, then timed out.

    Can anyone suggest any troubleshooting steps? Thanks, Charlotte.
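    A hedged first troubleshooting step for this symptom pattern - small packets (pings) fine, large transfers failing over a VPN - is to test for an MTU/fragmentation problem on the path to server1 (a sketch; the 1372-byte probe size is only an example, not taken from the post):

        rem Send do-not-fragment pings of increasing size to both servers.
        rem If large probes fail toward server1 but pass toward server2,
        rem suspect MTU/fragment handling on that specific path.
        ping -f -l 1372 172.20.10.18
        ping -f -l 1372 172.20.10.114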

    Read the article

  • How to enable Jetty to support cometd/reverse ajax while let it listen to port 80?

    - by janetsmith
    Hi, I would like to use the cometd / reverse Ajax capability of Jetty 7. I tried to configure it so it listens on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html , Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of the Apache mod_proxy do not support persistent connections. As far as I know, reverse Ajax in Jetty depends on continuations (I guess that means persistent connections). So how can I let Jetty support reverse Ajax while coexisting with an Apache server? Thanks.
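    For reference, a minimal Apache reverse-proxy sketch (assumptions: mod_proxy and mod_proxy_http are loaded, Jetty stays on 8080, and the hostname is a placeholder; long-poll requests need a generous ProxyTimeout so held connections are not cut off mid-wait):

        <VirtualHost *:80>
            ServerName example.com
            # Forward everything to the Jetty instance on 8080.
            ProxyPass        / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/
            # Comet/long-poll requests can be held open for a long time.
            ProxyTimeout 300
        </VirtualHost>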

    Read the article

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470. A known problem with these is that sometimes, when plugging in their power cable, they somehow lose all BIOS settings. This happened yesterday. After this happens, Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though: changing the BIOS setting to 'RAID OFF' makes it boot without problems. Note that in the meantime, nothing config-related was changed on machine A. It wasn't even on. Indeed, after doing this, everything is fine. Everything includes all normal operations: remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies, registry keys, ...) but to no avail.

        > ping -a 192.168.2.2
        Pinging A [192.168.2.2] with 32 bytes of data:
        Reply from 192.168.2.2: bytes=32 time<1ms TTL=128

        > net view \\192.168.2.2
        System error 5 has occurred.
        Access is denied.

        > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD
        System error 86 has occurred.
        The specified network password is not correct.

    A solution to this would be great: I haven't been able to do any work since yesterday ;]

    Update: after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs), the shared folders work. Putting the disk back into B, the same problem remains. Why does this depend on hardware, and more importantly, on which hardware?

    Read the article

  • Cannot access the new cloned server even after new IP address assignment

    - by tough
    I was able to clone an Ubuntu 10.04 server residing in the cloud. It appeared that I was not getting an IP for the new VM, so I followed some of these steps:

        # cd /etc/udev/rules.d
        # cp 70-persistent-net.rules /root/
        # rm 70-persistent-net.rules
        # reboot

    I didn't follow the later commands, as I was unable to see two eth MACs as described in the referenced site. After this I am able to get an IP for it, which is different from the original IP, and I have added the new IP to the DNS server. Now when I try to access it with its assigned (new) domain, I am directed to the old server. I can see both VMs running with different IPs. Where might I have gone wrong? I am new to this admin thing.
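    A quick way to check whether the DNS record or a stale cache is at fault (a sketch; the domain name is a placeholder):

        # Does the name actually resolve to the new IP?
        dig +short newvm.example.com
        # If the answer is the old IP, the DNS record (or its TTL/cache)
        # is the problem, not the clone; flush the local cache and re-check.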

    Read the article

  • Install newer version of GCC in Knoppix

    - by Z boson
    I have Knoppix 7.30 installed on a USB drive with a persistent file. It comes with GCC 4.7.2. I would like to install GCC 4.9.x or 4.8.x. Since Knoppix is based on Debian, I would normally do something like

        apt-get upgrade

    But as far as I understand, it's not recommended to do this with Knoppix:

        Warning: apt-get upgrade is a BAD IDEA. It will, quite probably, render your KNOPPIX
        remaster unbootable, or broken in some way. A far safer method is to only upgrade
        packages as necessary.

    I can say from experience that that is the case. So how should I go about installing GCC 4.9? Another option would be to "remaster" Knoppix to use GCC 4.9 by default rather than installing it with a persistent file. I would be happy with either solution.
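    Following the "only upgrade packages as necessary" advice, a sketch of the per-package route (assuming a Debian repository carrying gcc-4.8 is already in /etc/apt/sources.list; the alternatives priority of 50 is arbitrary):

        sudo apt-get update
        # Install just the newer compiler packages, not a full upgrade.
        sudo apt-get install gcc-4.8 g++-4.8

        # Make them the default gcc/g++ via the alternatives system.
        sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
        sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 50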

    Read the article

  • Run Photoshop on VM (virtual machine) to increase calculation power, without violating the licence agreement (EULA)

    - by mecplusultra
    A question concerning the possibility of running Photoshop in a VM without violating the licence agreement (EULA): My Adobe Photoshop is bloody slow, and sometimes I need to launch thousands of image calculations that have to use hundreds of PSD templates. I want to increase my Photoshop calculation power by creating non-persistent virtual machines on my hosted server. Each VM would only be alive for a few seconds, just enough time to deliver the calculated file. Is this a violation of the EULA? I must clarify that I'm the only one to access my non-persistent VMs.

    Read the article

  • Adding Core Data to existing iPhone project

    - by swalkner
    I'd like to add Core Data to an existing iPhone project, but I still get a lot of compile errors:

        NSManagedObjectContext undeclared
        Expected specifier-qualifier-list before 'NSManagedObjectModel'
        ...

    I already added the Core Data framework to the target (right click on my project under "Targets", "Add" - "Existing Frameworks", "CoreData.framework"). My header file:

        NSManagedObjectModel *managedObjectModel;
        NSManagedObjectContext *managedObjectContext;
        NSPersistentStoreCoordinator *persistentStoreCoordinator;

        [...]

        @property (nonatomic, retain, readonly) NSManagedObjectModel *managedObjectModel;
        @property (nonatomic, retain, readonly) NSManagedObjectContext *managedObjectContext;
        @property (nonatomic, retain, readonly) NSPersistentStoreCoordinator *persistentStoreCoordinator;

    What am I missing? Starting a new project is not an option... Thanks a lot!

    Edit: sorry, I do have those implementations... but it seems like the library is missing... the implementation methods are full of compile errors like "managedObjectContext undeclared" and "NSPersistentStoreCoordinator undeclared", but also "Expected ')' before NSManagedObjectContext" (although it seems like the parentheses are correct)...

        #pragma mark -
        #pragma mark Core Data stack

        /**
         Returns the managed object context for the application.
         If the context doesn't already exist, it is created and bound to the
         persistent store coordinator for the application.
         */
        - (NSManagedObjectContext *) managedObjectContext {
            if (managedObjectContext != nil) {
                return managedObjectContext;
            }
            NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
            if (coordinator != nil) {
                managedObjectContext = [[NSManagedObjectContext alloc] init];
                [managedObjectContext setPersistentStoreCoordinator: coordinator];
            }
            return managedObjectContext;
        }

        /**
         Returns the managed object model for the application.
         If the model doesn't already exist, it is created by merging all of
         the models found in the application bundle.
         */
        - (NSManagedObjectModel *)managedObjectModel {
            if (managedObjectModel != nil) {
                return managedObjectModel;
            }
            managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
            return managedObjectModel;
        }

        /**
         Returns the persistent store coordinator for the application.
         If the coordinator doesn't already exist, it is created and the
         application's store added to it.
         */
        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"Core_Data.sqlite"]];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) {
                /*
                 Replace this implementation with code to handle the error appropriately.
                 abort() causes the application to generate a crash log and terminate.
                 You should not use this function in a shipping application, although it
                 may be useful during development. If it is not possible to recover from
                 the error, display an alert panel that instructs the user to quit the
                 application by pressing the Home button.

                 Typical reasons for an error here include:
                 * The persistent store is not accessible
                 * The schema for the persistent store is incompatible with the current managed object model
                 Check the error message to determine what the actual problem was.
                 */
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }
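    Symptoms like "NSManagedObjectContext undeclared" usually mean the compiler never saw the framework headers, even though the framework is linked. A minimal sketch of the top of such a header (the class name AppDelegate is a placeholder):

        #import <UIKit/UIKit.h>
        #import <CoreData/CoreData.h>   // without this, every Core Data type is "undeclared"

        @interface AppDelegate : NSObject <UIApplicationDelegate> {
            NSManagedObjectModel *managedObjectModel;
            NSManagedObjectContext *managedObjectContext;
            NSPersistentStoreCoordinator *persistentStoreCoordinator;
        }
        @end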

    Read the article

  • Blackberry - application settings save/load

    - by Max Gontar
    Hi! I know two ways to save/load application settings:

    1. use PersistentStore
    2. use the filesystem (store, since SDCard is optional)

    I'd like to know what your practices are for working with application settings.

    Using PersistentStore to save/load application settings: the persistent store provides a means for objects to persist across device resets. A persistent object consists of a key-value pair. When a persistent object is committed to the persistent store, that object's value is stored in flash memory via a deep copy. The value can then be retrieved at a later point in time via the key.

    Example of a helper class for storing and retrieving settings:

        class PSOptions {
            private PersistentObject mStore;
            private LongHashtableCollection mSettings;
            private long KEY_URL = 0;
            private long KEY_ENCRYPT = 1;
            private long KEY_REFRESH_PERIOD = 2;

            public PSOptions() {
                // "AppSettings" = 0x71f1f00b95850cfeL
                mStore = PersistentStore.getPersistentObject(0x71f1f00b95850cfeL);
            }

            public String getUrl() {
                Object result = get(KEY_URL);
                return (null != result) ? (String) result : null;
            }

            public void setUrl(String url) {
                set(KEY_URL, url);
            }

            public boolean getEncrypt() {
                Object result = get(KEY_ENCRYPT);
                return (null != result) ? ((Boolean) result).booleanValue() : false;
            }

            public void setEncrypt(boolean encrypt) {
                set(KEY_ENCRYPT, new Boolean(encrypt));
            }

            public int getRefreshPeriod() {
                Object result = get(KEY_REFRESH_PERIOD);
                return (null != result) ? ((Integer) result).intValue() : -1;
            }

            public void setRefreshRate(int refreshRate) {
                set(KEY_REFRESH_PERIOD, new Integer(refreshRate));
            }

            private void set(long key, Object value) {
                synchronized (mStore) {
                    mSettings = (LongHashtableCollection) mStore.getContents();
                    if (null == mSettings) {
                        mSettings = new LongHashtableCollection();
                    }
                    mSettings.put(key, value);
                    mStore.setContents(mSettings);
                    mStore.commit();
                }
            }

            private Object get(long key) {
                synchronized (mStore) {
                    mSettings = (LongHashtableCollection) mStore.getContents();
                    if (null != mSettings && mSettings.size() != 0) {
                        return mSettings.get(key);
                    } else {
                        return null;
                    }
                }
            }
        }

    Example of use:

        class Scr extends MainScreen implements FieldChangeListener {
            PSOptions mOptions = new PSOptions();
            BasicEditField mUrl = new BasicEditField("Url:", "http://stackoverflow.com/");
            CheckboxField mEncrypt = new CheckboxField("Enable encrypt", false);
            GaugeField mRefresh = new GaugeField("Refresh period", 1, 60 * 10, 10,
                    GaugeField.EDITABLE | FOCUSABLE);
            ButtonField mLoad = new ButtonField("Load settings", ButtonField.CONSUME_CLICK);
            ButtonField mSave = new ButtonField("Save settings", ButtonField.CONSUME_CLICK);

            public Scr() {
                add(mUrl);
                mUrl.setChangeListener(this);
                add(mEncrypt);
                mEncrypt.setChangeListener(this);
                add(mRefresh);
                mRefresh.setChangeListener(this);
                HorizontalFieldManager hfm = new HorizontalFieldManager(USE_ALL_WIDTH);
                add(hfm);
                hfm.add(mLoad);
                mLoad.setChangeListener(this);
                hfm.add(mSave);
                mSave.setChangeListener(this);
                loadSettings();
            }

            public void fieldChanged(Field field, int context) {
                if (field == mLoad) {
                    loadSettings();
                } else if (field == mSave) {
                    saveSettings();
                }
            }

            private void saveSettings() {
                mOptions.setUrl(mUrl.getText());
                mOptions.setEncrypt(mEncrypt.getChecked());
                mOptions.setRefreshRate(mRefresh.getValue());
            }

            private void loadSettings() {
                mUrl.setText(mOptions.getUrl());
                mEncrypt.setChecked(mOptions.getEncrypt());
                mRefresh.setValue(mOptions.getRefreshPeriod());
            }
        }

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in session or pageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that at best you end up with an NPE after your session has migrated, and at worst you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever be storing references to components anywhere other than request scope (or maybe backing bean scope). So double check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem: I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context - e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then re-check it in the same way, leading of course to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI component related state. The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in faces-config.xml. The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

        <managed-bean>
            <managed-bean-name>uiManager</managed-bean-name>
            <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

    Which is then injected into any backing bean that needs it:

        <managed-bean>
            <managed-bean-name>mainPageBB</managed-bean-name>
            <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
            <managed-property>
                <property-name>uiManager</property-name>
                <property-class>oracle.demo.view.state.UIManager</property-class>
                <value>#{uiManager}</value>
            </managed-property>
        </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager:

        private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tabset example, you might have something like this:

        <af:panelTabbed>
            <af:showDetailItem text="First"
                               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
                ...
            </af:showDetailItem>
            <af:showDetailItem text="Second"
                               disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
                ...
            </af:showDetailItem>
            ...
        </af:panelTabbed>

    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab or whatever - how you choose to store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is no great strain to implement for an application and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!
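    The article never shows the bean itself, so here is a minimal sketch of a Map-based UIManager (my illustration, not code from the post - the nested-Map layout simply mirrors the MAIN_TABSET_STATE example above):

        package oracle.demo.view.state;

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        // Session scoped holder for persistent UI state; serializable so it
        // survives session migration, unlike a raw component reference.
        public class UIManager implements Serializable {

            private Map<String, Object> settings = new HashMap<String, Object>();

            public UIManager() {
                // Seed the tabset state used in the page snippet above.
                Map<String, Boolean> tabState = new HashMap<String, Boolean>();
                tabState.put("FIRST", Boolean.TRUE);
                tabState.put("SECOND", Boolean.FALSE);
                settings.put("MAIN_TABSET_STATE", tabState);
            }

            public Map<String, Object> getSettings() {
                return settings;
            }
        }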

    Read the article

  • JMS : Specifying Message Paging Directory on Weblogic server.

    - by adejuanc
    Two ways to configure or modify the paging directory; here are the examples:

    1. Via the config.xml file:

        <paging-directory>C:\temp</paging-directory>

        <jms-server>
            <name>JMSServerMS1</name>
            <target>MS1</target>
            <persistent-store xsi:nil="true"></persistent-store>
            <hosting-temporary-destinations>true</hosting-temporary-destinations>
            <temporary-template-resource xsi:nil="true"></temporary-template-resource>
            <temporary-template-name xsi:nil="true"></temporary-template-name>
            <message-buffer-size>-1</message-buffer-size>
            <paging-directory>C:\temp</paging-directory>
            <paging-file-locking-enabled>true</paging-file-locking-enabled>
            <expiration-scan-interval>30</expiration-scan-interval>
        </jms-server>

    2. Via WLST (the WebLogic Scripting Tool):

        startEdit()
        cd('/Deployments/JMSServerMS1')
        cmo.setPagingDirectory('C:\\temp')
        activate()

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not actually occur in persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures, or volume manager errors. The demo shows exactly this case: an I/O subsystem acknowledges the completion of a block write that never reached persistent storage. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database). Links: MOS (1302539.1) "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"; Demo; MAA Best Practices. Rene Kundersma
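    For context, lost-write detection in a Data Guard configuration hinges on one initialization parameter; a minimal sketch (TYPICAL is the commonly recommended setting on both primary and standby - see the MOS note above for the authoritative guidance):

        -- Enable lost-write detection; buffer-cache read versions are logged
        -- in the redo so the standby can compare them during Redo Apply.
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;

        -- Verify the setting (SQL*Plus).
        SHOW PARAMETER DB_LOST_WRITE_PROTECT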

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev to re-read the config?

        service udev restart

    and

        udevadm control --reload-rules

    don't help. So is there any valid way other than rebooting? (Yes, reboot fixes the issue.)

    UPD: yes, I know I should prepend the commands with sudo, but either command I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0.

    UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. There is no error when executing either of the commands posted above, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected.

    UPD 3: let me explain the whole case better ;-) For development purposes I am writing a script that clones virtual machines (VirtualBox-driven) and pre-configures them. I run a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine boots for the first time there are two rules:

    1. eth0, which does not exist, since it carries the MAC of the original VM image
    2. eth1, which exists, but all the configuration in all files refers to eth0, so it is not much good to me

    So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. Currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take extra time, which is not a good thing at the VM-building stage); I just want to have /dev rebuilt with some command, so I have a ready-to-use VM without any reboots.
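    The rename can usually be applied to the live system without a reboot by taking the interface down and renaming it directly (a sketch; the edited udev rule then keeps the name on future boots):

        # Apply the new name to the running interface (it must be down first).
        sudo ip link set eth1 down
        sudo ip link set eth1 name eth0
        sudo ip link set eth0 up

        # Then re-trigger udev so device state is consistent with the edited rule.
        sudo udevadm trigger --subsystem-match=net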

    Read the article

  • How can I properly create /dev/dvd?

    - by chazomaticus
    Certain programs look for /dev/dvd by default to find DVDs. When I first boot my computer without a DVD inserted, /dev/dvd exists and points to the correct place (/dev/sr0). However, when I insert a DVD, /dev/dvd disappears. I'd like it to stick around so I don't have to navigate to /dev/sr0 in programs that are looking for DVDs. How do I ensure that the /dev/dvd symlink always exists and points to the right place? It looks like I can add something to /etc/udev/rules.d/70-persistent-cd.rules. This site gives a couple of examples, but the 70-persistent-cd.rules file says "add the ENV{GENERATED}=1 flag to your own rules", which isn't part of the examples. The man 7 udev page is impenetrable to me, and I'm not convinced the linked page gives 100% of the information I need. So, what can I do on a modern Ubuntu 12.04 (or later) system to make /dev/dvd always exist and point to the right device?

    EDIT: Is it as simple as adding ENV{GENERATED}=1 to the rules on the linked page, something like this:

        SUBSYSTEM=="block", KERNEL=="sr0", SYMLINK+="dvd", GROUP="cdrom", ENV{GENERATED}=1

    Is that the right information for modern Ubuntu? What is ENV{GENERATED} doing there, when the rule wasn't generated but hand-written?

    Read the article
