Search Results

Search found 59785 results on 2392 pages for 'time synchronization'.


  • I am unable to connect to my netbook from any machine on my network until the netbook has pinged it

    - by Samuel Husky
    I have a rather strange issue with my netbook on my local network. When I try to connect to it in any way from a remote system, the connection fails. However, if I get the netbook to ping the machine that is trying to connect, it mysteriously starts to work. Below is the ping test from my main PC to the netbook:

        C:\Users\Sam>ping 192.168.8.102
        Pinging 192.168.8.102 with 32 bytes of data:
        Reply from 192.168.8.100: Destination host unreachable.
        Reply from 192.168.8.100: Destination host unreachable.
        Reply from 192.168.8.100: Destination host unreachable.
        Reply from 192.168.8.100: Destination host unreachable.
        Ping statistics for 192.168.8.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Now a ping from the netbook to my main PC:

        sam@malamute ~ $ ping 192.168.8.100
        PING 192.168.8.100 (192.168.8.100) 56(84) bytes of data.
        64 bytes from 192.168.8.100: icmp_req=1 ttl=128 time=2.46 ms
        64 bytes from 192.168.8.100: icmp_req=2 ttl=128 time=0.835 ms
        64 bytes from 192.168.8.100: icmp_req=3 ttl=128 time=1.60 ms
        64 bytes from 192.168.8.100: icmp_req=4 ttl=128 time=1.32 ms
        64 bytes from 192.168.8.100: icmp_req=5 ttl=128 time=1.34 ms
        ^C
        --- 192.168.8.100 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4004ms
        rtt min/avg/max/mdev = 0.835/1.514/2.460/0.536 ms

    And the same ping again from the main PC after the netbook has made a connection to it:

        C:\Users\Sam>ping 192.168.8.102
        Pinging 192.168.8.102 with 32 bytes of data:
        Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
        Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
        Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
        Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
        Ping statistics for 192.168.8.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 1ms, Average = 1ms

    The netbook is running Gentoo and is currently connected via wireless. My main PC is running Windows 7, but I get the same result no matter which PC I use on this network. Please see this example from a CentOS machine on the same network:

        [root@tiger ~]# ping 192.168.8.102
        PING 192.168.8.102 (192.168.8.102) 56(84) bytes of data.
        From 192.168.8.200 icmp_seq=2 Destination Host Unreachable
        From 192.168.8.200 icmp_seq=3 Destination Host Unreachable
        From 192.168.8.200 icmp_seq=4 Destination Host Unreachable
        --- 192.168.8.102 ping statistics ---
        6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5000ms, pipe 3

    If you need any more information or require logs or config files please let me know; any assistance is greatly appreciated. Additional info: no responses show up in a tcpdump on the netbook; the same thing happens when booting into Ubuntu from a USB key; there is no issue when using a wired Ethernet connection.
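
    The symptoms above (no replies until the netbook itself sends traffic, and nothing visible in a tcpdump on the netbook) are consistent with the remote machines' ARP requests never reaching the wireless interface, so the netbook only becomes reachable once its own outbound traffic has populated the other hosts' ARP caches. That is only a hypothesis, not something stated in the question, but it is easy to check. A minimal sketch, assuming Python and scapy are available on the netbook (interface name is an example):

        #!/usr/bin/env python
        # Hypothetical diagnostic: run on the netbook (as root) while a remote
        # host pings it. If no "who-has 192.168.8.102" requests appear, the ARP
        # broadcasts are being lost before they reach the wireless interface.
        from scapy.all import sniff, ARP

        def show(pkt):
            if ARP in pkt and pkt[ARP].op == 1:  # op 1 = who-has (request)
                print "ARP request: who has %s? tell %s" % (pkt[ARP].pdst, pkt[ARP].psrc)

        # Capture ARP traffic on the wireless interface for 60 seconds.
        sniff(iface="wlan0", filter="arp", prn=show, timeout=60)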

    Read the article

  • Why is this giving me 2 different sets of timezones?

    - by chobo2
    Hi I have this line to get all the timezones Dictionary<string, TimeZoneInfo> storeZoneName = TimeZoneInfo.GetSystemTimeZones().ToDictionary(z => z.DisplayName); Now when I upload I try it on my local machine I get this (UTC-12:00) International Date Line West (UTC-11:00) Coordinated Universal Time-11 (UTC-11:00) Samoa (UTC-10:00) Hawaii (UTC-09:00) Alaska (UTC-08:00) Baja California (UTC-08:00) Pacific Time (US & Canada) (UTC-07:00) Arizona (UTC-07:00) Chihuahua, La Paz, Mazatlan (UTC-07:00) Mountain Time (US & Canada) (UTC-06:00) Central America (UTC-06:00) Central Time (US & Canada) (UTC-06:00) Guadalajara, Mexico City, Monterrey (UTC-06:00) Saskatchewan (UTC-05:00) Bogota, Lima, Quito (UTC-05:00) Eastern Time (US & Canada) (UTC-05:00) Indiana (East) (UTC-04:30) Caracas (UTC-04:00) Asuncion (UTC-04:00) Atlantic Time (Canada) (UTC-04:00) Cuiaba (UTC-04:00) Georgetown, La Paz, Manaus, San Juan (UTC-04:00) Santiago (UTC-03:30) Newfoundland (UTC-03:00) Brasilia (UTC-03:00) Buenos Aires (UTC-03:00) Cayenne, Fortaleza (UTC-03:00) Greenland (UTC-03:00) Montevideo (UTC-02:00) Coordinated Universal Time-02 (UTC-02:00) Mid-Atlantic (UTC-01:00) Azores (UTC-01:00) Cape Verde Is. (UTC) Casablanca (UTC) Coordinated Universal Time (UTC) Dublin, Edinburgh, Lisbon, London (UTC) Monrovia, Reykjavik (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna (UTC+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague (UTC+01:00) Brussels, Copenhagen, Madrid, Paris (UTC+01:00) Sarajevo, Skopje, Warsaw, Zagreb (UTC+01:00) West Central Africa (UTC+02:00) Amman (UTC+02:00) Athens, Bucharest, Istanbul (UTC+02:00) Beirut (UTC+02:00) Cairo (UTC+02:00) Harare, Pretoria (UTC+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius (UTC+02:00) Jerusalem (UTC+02:00) Minsk (UTC+02:00) Windhoek (UTC+03:00) Baghdad (UTC+03:00) Kuwait, Riyadh (UTC+03:00) Moscow, St. Petersburg, Volgograd (UTC+03:00) Nairobi (UTC+03:30) Tehran (UTC+04:00) Abu Dhabi, Muscat (UTC+04:00) Baku (UTC+04:00) Port Louis (UTC+04:00) Tbilisi (UTC+04:00) Yerevan (UTC+04:30) Kabul (UTC+05:00) Ekaterinburg (UTC+05:00) Islamabad, Karachi (UTC+05:00) Tashkent (UTC+05:30) Chennai, Kolkata, Mumbai, New Delhi (UTC+05:30) Sri Jayawardenepura (UTC+05:45) Kathmandu (UTC+06:00) Astana (UTC+06:00) Dhaka (UTC+06:00) Novosibirsk (UTC+06:30) Yangon (Rangoon) (UTC+07:00) Bangkok, Hanoi, Jakarta (UTC+07:00) Krasnoyarsk (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi (UTC+08:00) Irkutsk (UTC+08:00) Kuala Lumpur, Singapore (UTC+08:00) Perth (UTC+08:00) Taipei (UTC+08:00) Ulaanbaatar (UTC+09:00) Osaka, Sapporo, Tokyo (UTC+09:00) Seoul (UTC+09:00) Yakutsk (UTC+09:30) Adelaide (UTC+09:30) Darwin (UTC+10:00) Brisbane (UTC+10:00) Canberra, Melbourne, Sydney (UTC+10:00) Guam, Port Moresby (UTC+10:00) Hobart (UTC+10:00) Vladivostok (UTC+11:00) Magadan, Solomon Is., New Caledonia (UTC+12:00) Auckland, Wellington (UTC+12:00) Coordinated Universal Time+12 (UTC+12:00) Fiji (UTC+12:00) Petropavlovsk-Kamchatsky (UTC+13:00) Nuku'alofa When I run it on a different local machine or my server I have this. 
<option value="(GMT) Casablanca">(GMT) Casablanca</option> <option value="(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London">(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London</option> <option value="(GMT) Monrovia, Reykjavik">(GMT) Monrovia, Reykjavik</option> <option value="(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna">(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna</option> <option value="(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague">(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague</option> <option value="(GMT+01:00) Brussels, Copenhagen, Madrid, Paris">(GMT+01:00) Brussels, Copenhagen, Madrid, Paris</option> <option value="(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb">(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb</option> <option value="(GMT+01:00) West Central Africa">(GMT+01:00) West Central Africa</option> <option value="(GMT+02:00) Amman">(GMT+02:00) Amman</option> <option value="(GMT+02:00) Athens, Bucharest, Istanbul">(GMT+02:00) Athens, Bucharest, Istanbul</option> <option value="(GMT+02:00) Beirut">(GMT+02:00) Beirut</option> <option value="(GMT+02:00) Cairo">(GMT+02:00) Cairo</option> <option value="(GMT+02:00) Harare, Pretoria">(GMT+02:00) Harare, Pretoria</option> <option value="(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius">(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius</option> <option value="(GMT+02:00) Jerusalem">(GMT+02:00) Jerusalem</option> <option value="(GMT+02:00) Minsk">(GMT+02:00) Minsk</option> <option value="(GMT+02:00) Windhoek">(GMT+02:00) Windhoek</option> <option value="(GMT+03:00) Baghdad">(GMT+03:00) Baghdad</option> <option value="(GMT+03:00) Kuwait, Riyadh">(GMT+03:00) Kuwait, Riyadh</option> <option value="(GMT+03:00) Moscow, St. Petersburg, Volgograd">(GMT+03:00) Moscow, St. 
Petersburg, Volgograd</option> <option value="(GMT+03:00) Nairobi">(GMT+03:00) Nairobi</option> <option value="(GMT+03:00) Tbilisi">(GMT+03:00) Tbilisi</option> <option value="(GMT+03:30) Tehran">(GMT+03:30) Tehran</option> <option value="(GMT+04:00) Abu Dhabi, Muscat">(GMT+04:00) Abu Dhabi, Muscat</option> <option value="(GMT+04:00) Baku">(GMT+04:00) Baku</option> <option value="(GMT+04:00) Port Louis">(GMT+04:00) Port Louis</option> <option value="(GMT+04:00) Yerevan">(GMT+04:00) Yerevan</option> <option value="(GMT+04:30) Kabul">(GMT+04:30) Kabul</option> <option value="(GMT+05:00) Ekaterinburg">(GMT+05:00) Ekaterinburg</option> <option value="(GMT+05:00) Islamabad, Karachi">(GMT+05:00) Islamabad, Karachi</option> <option value="(GMT+05:00) Tashkent">(GMT+05:00) Tashkent</option> <option value="(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi">(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi</option> <option value="(GMT+05:30) Sri Jayawardenepura">(GMT+05:30) Sri Jayawardenepura</option> <option value="(GMT+05:45) Kathmandu">(GMT+05:45) Kathmandu</option> <option value="(GMT+06:00) Almaty, Novosibirsk">(GMT+06:00) Almaty, Novosibirsk</option> <option value="(GMT+06:00) Astana, Dhaka">(GMT+06:00) Astana, Dhaka</option> <option value="(GMT+06:30) Yangon (Rangoon)">(GMT+06:30) Yangon (Rangoon)</option> <option value="(GMT+07:00) Bangkok, Hanoi, Jakarta">(GMT+07:00) Bangkok, Hanoi, Jakarta</option> <option value="(GMT+07:00) Krasnoyarsk">(GMT+07:00) Krasnoyarsk</option> <option value="(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi">(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi</option> <option value="(GMT+08:00) Irkutsk, Ulaan Bataar">(GMT+08:00) Irkutsk, Ulaan Bataar</option> <option value="(GMT+08:00) Kuala Lumpur, Singapore">(GMT+08:00) Kuala Lumpur, Singapore</option> <option value="(GMT+08:00) Perth">(GMT+08:00) Perth</option> <option value="(GMT+08:00) Taipei">(GMT+08:00) Taipei</option> <option value="(GMT+09:00) Osaka, Sapporo, Tokyo">(GMT+09:00) Osaka, Sapporo, Tokyo</option> <option value="(GMT+09:00) Seoul">(GMT+09:00) Seoul</option> <option value="(GMT+09:00) Yakutsk">(GMT+09:00) Yakutsk</option> <option value="(GMT+09:30) Adelaide">(GMT+09:30) Adelaide</option> <option value="(GMT+09:30) Darwin">(GMT+09:30) Darwin</option> <option value="(GMT+10:00) Brisbane">(GMT+10:00) Brisbane</option> <option value="(GMT+10:00) Canberra, Melbourne, Sydney">(GMT+10:00) Canberra, Melbourne, Sydney</option> <option value="(GMT+10:00) Guam, Port Moresby">(GMT+10:00) Guam, Port Moresby</option> <option value="(GMT+10:00) Hobart">(GMT+10:00) Hobart</option> <option value="(GMT+10:00) Vladivostok">(GMT+10:00) Vladivostok</option> <option value="(GMT+11:00) Magadan, Solomon Is., New Caledonia">(GMT+11:00) Magadan, Solomon Is., New Caledonia</option> <option value="(GMT+12:00) Auckland, Wellington">(GMT+12:00) Auckland, Wellington</option> <option value="(GMT+12:00) Fiji, Kamchatka, Marshall Is.">(GMT+12:00) Fiji, Kamchatka, Marshall Is.</option> <option value="(GMT+13:00) Nuku'alofa">(GMT+13:00) Nuku'alofa</option> <option value="(GMT-01:00) Azores">(GMT-01:00) Azores</option> <option value="(GMT-01:00) Cape Verde Is.">(GMT-01:00) Cape Verde Is.</option> <option value="(GMT-02:00) Mid-Atlantic">(GMT-02:00) Mid-Atlantic</option> <option value="(GMT-03:00) Brasilia">(GMT-03:00) Brasilia</option> <option value="(GMT-03:00) Buenos Aires">(GMT-03:00) Buenos Aires</option> <option value="(GMT-03:00) Georgetown">(GMT-03:00) Georgetown</option> <option value="(GMT-03:00) Greenland">(GMT-03:00) 
Greenland</option> <option value="(GMT-03:00) Montevideo">(GMT-03:00) Montevideo</option> <option value="(GMT-03:30) Newfoundland">(GMT-03:30) Newfoundland</option> <option value="(GMT-04:00) Atlantic Time (Canada)">(GMT-04:00) Atlantic Time (Canada)</option> <option value="(GMT-04:00) La Paz">(GMT-04:00) La Paz</option> <option value="(GMT-04:00) Manaus">(GMT-04:00) Manaus</option> <option value="(GMT-04:00) Santiago">(GMT-04:00) Santiago</option> <option value="(GMT-04:30) Caracas">(GMT-04:30) Caracas</option> <option value="(GMT-05:00) Bogota, Lima, Quito, Rio Branco">(GMT-05:00) Bogota, Lima, Quito, Rio Branco</option> <option value="(GMT-05:00) Eastern Time (US &amp; Canada)">(GMT-05:00) Eastern Time (US &amp; Canada)</option> <option value="(GMT-05:00) Indiana (East)">(GMT-05:00) Indiana (East)</option> <option value="(GMT-06:00) Central America">(GMT-06:00) Central America</option> <option value="(GMT-06:00) Central Time (US &amp; Canada)">(GMT-06:00) Central Time (US &amp; Canada)</option> <option value="(GMT-06:00) Guadalajara, Mexico City, Monterrey">(GMT-06:00) Guadalajara, Mexico City, Monterrey</option> <option value="(GMT-06:00) Saskatchewan">(GMT-06:00) Saskatchewan</option> <option value="(GMT-07:00) Arizona">(GMT-07:00) Arizona</option> <option value="(GMT-07:00) Chihuahua, La Paz, Mazatlan">(GMT-07:00) Chihuahua, La Paz, Mazatlan</option> <option value="(GMT-07:00) Mountain Time (US &amp; Canada)">(GMT-07:00) Mountain Time (US &amp; Canada)</option> <option value="(GMT-08:00) Pacific Time (US &amp; Canada)">(GMT-08:00) Pacific Time (US &amp; Canada)</option> <option value="(GMT-08:00) Tijuana, Baja California">(GMT-08:00) Tijuana, Baja California</option> <option value="(GMT-09:00) Alaska">(GMT-09:00) Alaska</option> <option value="(GMT-10:00) Hawaii">(GMT-10:00) Hawaii</option> <option value="(GMT-11:00) Midway Island, Samoa">(GMT-11:00) Midway Island, Samoa</option> <option value="(GMT-12:00) International Date Line West">(GMT-12:00) International Date Line West</option> They are different. Same line of code but one is GMT and one is UTC. How can I force it to be always the same? Also I want to have a default choice of "UTC" but I am not sure what the diff is between this (UTC-11:00) Coordinated Universal Time-11 and this (UTC-02:00) Coordinated Universal Time-02

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. We will have a call to discuss this blog - please join us!Date: Thursday, November 1, 2012Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D2. If requested, enter your name and email address.3. If a password is required, enter the meeting password: oracle1234. Click "Join". To join the teleconference:Call-in toll-free number:       1-866-682-4770  (US/Canada)      Other countries:                https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ONConference Code:       7629343#Security code:            7777# Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data View Solaris services status details Drill down into a process details View the busiest zones if applicable Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones: This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab: By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file). 
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
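
    Given that layout, it is easy to see how much history a particular agent is currently holding. A small illustrative sketch in Python (not part of Ops Center itself; the exact entry names inside the archive may differ), run directly on an agent:

        #!/usr/bin/env python
        # Illustrative only: list the collection times stored in an agent's
        # analytics archive, based on the layout described above.
        import zipfile, datetime

        archive = "/var/opt/sun/xvm/analytics/historical/os.zip"
        z = zipfile.ZipFile(archive)
        # Each entry is a small text file named after the epoch timestamp
        # at which the sample was taken.
        stamps = sorted(int(name.split("/")[-1]) for name in z.namelist()
                        if name.split("/")[-1].isdigit())
        if stamps:
            fmt = lambda ts: datetime.datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S")
            print "%d samples, oldest %s, newest %s" % (
                len(stamps), fmt(stamps[0]), fmt(stamps[-1]))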

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - IntroductionIn his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started. We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do…often what we do best!  We “fat finger”, we spill drinks on keyboards, unplug the wrong cables, etc.  In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events: Accidentally updated a table with wrong values !! Performed a batch update that went wrong - due to logical errors in the code !! Dropped a table !! How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point-in-time when the error occurred (incomplete or point-in-time recovery).  Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process.Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed?  Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and convenient recovery from logical corruptions. HistoryFlashback Query, the first Flashback Technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time.Flashback Technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even at a transaction level.Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation.In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c, Flashback Features Oracle Flashback Database enables point-in-time-recovery of the entire database without requiring a traditional restore and recovery operation. 
It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.Oracle Flashback Query enables data to be viewed at a point-in-time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). This feature can also be used to build self-service error correction into applications, empowering end-users to undo and correct their errors.Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCN) Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing SQLOracle Flashback Transaction is a procedure used to back-out a transaction and its dependent transactions.Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via SQL command line (or) via Enterprise Manager.  Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following table describes the various Flashback technologies: their purpose, dependencies and situations where each individual technology can be used.   Example Syntax Error investigation related:The purpose is to investigate what went wrong and what the values were at certain points in timeFlashback Queries  ( select .. as of SCN | Timestamp )   - Helps to see the value of a row/set of rows at a point in timeFlashback Version Queries  ( select .. versions between SCN | Timestamp and SCN | Timestamp)  - Helps determine how the value evolved between certain SCNs or between timestamps Flashback Transaction Queries (select .. XID=)   - Helps to understand how the transaction caused the changes.Error correction related:The purpose is to fix the error and correct the problems,Flashback Table  (flashback table .. to SCN | Timestamp)  - To rewind the table to a particular timestamp or SCN to reverse unwanted updates Flashback Drop (flashback table ..  to before drop )  - To undrop or undelete a table Flashback Database (flashback database to SCN  | Restore Point )  - This is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery). Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..))  - To reverse a transaction and its related transactions Advanced use cases Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback. Block Media recovery by RMAN - to perform block level recovery Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes Re-instate old primary in a Data Guard environment – this avoids the need to restore an old backup and perform a recovery to make it a new standby. 
Guaranteed Restore Points - to bring back the entire database to an older point-in-time in a guaranteed way. and so on..I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors.  As mentioned earlier, I will take a deeper-dive into to some of the critical Flashback features in my upcoming blogs and address common use cases.

    Read the article

  • Profiling Startup Of VS2012 &ndash; SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interesting) it does attach to a specific process when it is started. For this you need to check “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which do allow to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling Api of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time which none of the other products support in their views at the same time. If you want to optimize for CPU consumption sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill deeper you can double click on a method to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic   button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab you have in the context menu also many other useful analysis options One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs how this method did behave over time. Did it become slower at some point in time or was only the first call slow? 
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but, I don't have a profile. I was trying to fix the problem in question http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files because they don't exist.

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but, I don't have a profile. I was trying to fix the problem in question http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files because they don't exist.

    Read the article

  • linux "date -s" command not working to change date on a server

    - by nelaar
        date +%T --set="12:19:06"
        12:19:06
        date
        Mon Nov 26 12:37:32 SAST 2012

    I have tried many different forms of this command, but nothing seems to work: changing the date on this server (which runs as a VM) simply does not stick. Our messages log shows entries like this:

        ntpd[3496]: time correction of -1098 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.

    Our server is now about 20 minutes out, and it seems it has not been updating the time correctly for a few days:

        Nov 22 19:29:23 hostname ntpd[1818]: time reset -998.577519 s
        Nov 22 19:32:34 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:33:39 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 19:52:30 hostname ntpd[1818]: time reset -998.992426 s
        Nov 22 19:55:47 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:56:53 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:13:04 hostname ntpd[1818]: time reset -999.374412 s
        Nov 22 20:16:40 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:17:44 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:32:02 hostname ntpd[1818]: time reset -999.716832 s
        Nov 22 20:35:28 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:36:16 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:56:39 hostname ntpd[1818]: time correction of -1000 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.
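
    The log itself explains part of the behaviour: once the clock is more than 1000 seconds out, ntpd refuses to step it and asks for a manual correction, and the repeated resets of roughly -999 seconds suggest the VM's clock keeps gaining time faster than ntpd can discipline it, which would also undo a manual "date --set". Before stepping the clock by hand it helps to know the real offset from a reference server. A small sketch, assuming the ntplib package is installed (it is not mentioned in the question) and using a public pool server as an example:

        #!/usr/bin/env python
        # Illustrative: report the offset between the local clock and an NTP
        # server before correcting the time by hand.
        import ntplib
        from time import ctime

        client = ntplib.NTPClient()
        response = client.request('pool.ntp.org', version=3)
        print "local clock is off by %.1f seconds" % response.offset
        print "server time: %s" % ctime(response.tx_time)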

    Read the article

  • PHP's page generation time is 0.01s, so 1/0.01 = 100; however I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10 ms, so I should be able to execute 100 requests one after the other (using a single core on the server, since PHP is not multithreaded). However, I'm having problems even reaching 50 pages per second; right now I average about 25, under medium load. The application is really light: it consists of a small read (<5KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to find this bottleneck?
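
    The 1/0.01 = 100 figure only covers PHP's own generation time for a single client sending requests strictly back to back; connection handling, web-server queuing and any blocking I/O also add to the wall-clock time of each request, and real throughput depends on how many requests are in flight at once. A rough way to see where the ceiling is on a given setup is to measure requests per second at a few concurrency levels. A minimal sketch (the URL, request counts and concurrency values are placeholders, not taken from the question):

        #!/usr/bin/env python
        # Rough load-test sketch: a single serial client can never exceed
        # 1/latency even if the server has idle capacity, so measure at
        # several concurrency levels.
        import time, urllib2, threading

        URL = "http://example.com/page.php"   # hypothetical
        REQUESTS_PER_WORKER = 200

        def worker():
            for _ in range(REQUESTS_PER_WORKER):
                urllib2.urlopen(URL).read()

        for concurrency in (1, 2, 5, 10):
            threads = [threading.Thread(target=worker) for _ in range(concurrency)]
            start = time.time()
            for t in threads: t.start()
            for t in threads: t.join()
            total = concurrency * REQUESTS_PER_WORKER
            print "concurrency %2d: %.0f req/s" % (concurrency, total / (time.time() - start))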

    Read the article

  • Trying to fill the GAE datastore, but the code consumes too much CPU time. How can I optimize this?

    - by Neverland
    I am trying to get the list of images in Amazon EC2 into the Google datastore. I want to do this with a cron job inside GAE.

        class AmazonEC2uswest(db.Model):
            ami = db.StringProperty(required=True)
            mani = db.StringProperty()
            typ = db.StringProperty()
            arch = db.StringProperty()
            state = db.StringProperty()
            owner = db.StringProperty()

        class CronAMIsAmazonUS_WEST(webapp.RequestHandler):
            def get(self):
                aws_access_key_id_admin = "<secret>"
                aws_secret_access_key_admin = "<secret>"
                conn_us_west = boto.ec2.connect_to_region('us-west-1',
                    aws_access_key_id=aws_access_key_id_admin,
                    aws_secret_access_key=aws_secret_access_key_admin,
                    is_secure=False)
                liste_images_us_west = conn_us_west.get_all_images()
                laenge_liste_images_us_west = len(liste_images_us_west)
                for i in range(laenge_liste_images_us_west):
                    datastore_uswest_AMIs = AmazonEC2uswest(
                        ami=liste_images_us_west[i].id,
                        mani=str(liste_images_us_west[i].location),
                        typ=liste_images_us_west[i].type,
                        arch=liste_images_us_west[i].architecture,
                        state=liste_images_us_west[i].state,
                        owner=liste_images_us_west[i].ownerId)
                    datastore_uswest_AMIs.put()

    The problem: getting the list with get_all_images() takes only a few seconds, but writing the data to the Google datastore needs far too much CPU time. My IBM T42p (P4M at 2GHz) needs roughly a minute for that piece of code! Is it possible to optimize my code so that it needs less CPU time?
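
    A large share of the CPU time here is likely to be the many individual datastore writes, since each put() call is its own datastore round trip. One common change, sketched below on that assumption, is to build the model instances first and write them in batches, because db.put() also accepts a list of entities (names reuse the model from the question; the batch size is an example):

        # Sketch only: build the entities first, then write them in batches.
        from google.appengine.ext import db

        def store_images(images):
            entities = [AmazonEC2uswest(ami=img.id,
                                        mani=str(img.location),
                                        typ=img.type,
                                        arch=img.architecture,
                                        state=img.state,
                                        owner=img.ownerId)
                        for img in images]
            # db.put() accepts a list, so each batch is a single datastore
            # call instead of one RPC per image.
            BATCH = 100
            for i in range(0, len(entities), BATCH):
                db.put(entities[i:i + BATCH])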

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering announcing the general availability of 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering Simplified and High-Performance Solutions for Cloud, Big Data Analytics, and Real-Time Replication. The new release delivers extreme performance, increase IT productivity, and simplify deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, real-time business intelligence. With the 12c release Oracle becomes the new leader in the data integration and replication technologies as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions : Supporting Current and Emerging Initiatives Extreme Performance : Even higher performance than ever before Fast Time-to-Value : Higher IT Productivity and Simplified Solutions  With the new capabilities in Oracle Data Integrator 12c, customers can benefit from: Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger. Increased performance when executing data integration processes due to improved parallelism. Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c. Improved interoperability with Oracle Warehouse Builder which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering. Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion. Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance. Only Oracle GoldenGate provides the best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from: Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes via a new Coordinated Delivery feature for non-Oracle databases. Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6., as well as integration with Oracle Coherence. Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover integration. Enhanced security for credentials and encryption keys using Oracle Wallet. Real-time replication for databases hosted on public cloud environments supported by third-party clouds. 
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using SSD or SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times or from time to time your DAS / HDD becomes very slow according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking few seconds, queries could take minutes/hours to complete when CPU is busy.In SQL Server when a query / request needs to read data that is not in data cache or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state, this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete or get suspended due to waits or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state).The signal wait time can be up to 4ms per runnable task, this is by design. So if a CPU has 5 runnable tasks in the queue, then this query after the resource becomes available might wait up to a maximum of 5 X 4ms = 20ms in the runnable state (normally less as other tasks might not use the full quantum).In case the CPU usage is high, let’s say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the Hardware and Operating System level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WAITELOG waits since log records are written by LOG WRITER and hence very high signal_wait_time_ms leading to more query delays. However, Performance Monitor Counter, PhysicalDisk, Avg. Disk sec/Write will report very low latency times.Such delayed IO handling also occurs to read operations with artificially very high PAGEIOLATCH_SH wait time (with number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level. 
We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage - (with example scripts) to reproduce this behavior, please up vote the item so the issue will be addressed by the SQL Server product team soon.Thanks for your help and best regards,Ramesh MeyyappanHome: www.sqlworkshops.comLinkedIn: http://at.linkedin.com/in/rmeyyappan
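
    A quick way to check whether completed IOs are sitting in the runnable queue on a given instance is to compare signal wait time with total wait time for the wait types mentioned above; a high signal share points at CPU pressure rather than slow storage. A small sketch, assuming pyodbc is available and using a placeholder connection string:

        # Sketch: compare signal (CPU queue) wait vs. total wait for the
        # IO-related wait types discussed above.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes")
        sql = """
            SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
            FROM sys.dm_os_wait_stats
            WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
        """
        for wait_type, tasks, wait_ms, signal_ms in conn.cursor().execute(sql):
            pct = 100.0 * signal_ms / wait_ms if wait_ms else 0
            print "%-15s waits=%-8d total=%dms signal=%dms (%.0f%% signal)" % (
                wait_type, tasks, wait_ms, signal_ms, pct)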

    Read the article

  • Why does my Messaging Menu code not work when split into functions?

    - by fluteflute
    Below are two python programs. They're exactly the same, except for one is split into two functions. However only the one that's split into two functions doesn't work - the second function doesn't work. Why would this be? Note the code is taken from this useful blog post. Without functions (works): import gtk def show_window_function(x, y): print x print y # get the indicate module, which does all the work import indicate # Create a server item mm = indicate.indicate_server_ref_default() # If someone clicks your server item in the MM, fire the server-display signal mm.connect("server-display", show_window_function) # Set the type of messages that your item uses. It's not at all clear which types # you're allowed to use, here. mm.set_type("message.im") # You must specify a .desktop file: this is where the MM gets the name of your # app from. mm.set_desktop_file("/usr/share/applications/nautilus.desktop") # Show the item in the MM. mm.show() # Create a source item mm_source = indicate.Indicator() # Again, it's not clear which subtypes you are allowed to use here. mm_source.set_property("subtype", "im") # "Sender" is the text that appears in the source item in the MM mm_source.set_property("sender", "Unread") # If someone clicks this source item in the MM, fire the user-display signal mm_source.connect("user-display", show_window_function) # Light up the messaging menu so that people know something has changed mm_source.set_property("draw-attention", "true") # Set the count of messages in this source. mm_source.set_property("count", "15") # If you prefer, you can set the time of the last message from this source, # rather than the count. (You can't set both.) This means that instead of a # message count, the MM will show "2m" or similar for the time since this # message arrived. # mm_source.set_property_time("time", time.time()) mm_source.show() gtk.mainloop() With functions (second function is executed but doesn't actually work): import gtk def show_window_function(x, y): print x print y # get the indicate module, which does all the work import indicate def function1(): # Create a server item mm = indicate.indicate_server_ref_default() # If someone clicks your server item in the MM, fire the server-display signal mm.connect("server-display", show_window_function) # Set the type of messages that your item uses. It's not at all clear which types # you're allowed to use, here. mm.set_type("message.im") # You must specify a .desktop file: this is where the MM gets the name of your # app from. mm.set_desktop_file("/usr/share/applications/nautilus.desktop") # Show the item in the MM. mm.show() def function2(): # Create a source item mm_source = indicate.Indicator() # Again, it's not clear which subtypes you are allowed to use here. mm_source.set_property("subtype", "im") # "Sender" is the text that appears in the source item in the MM mm_source.set_property("sender", "Unread") # If someone clicks this source item in the MM, fire the user-display signal mm_source.connect("user-display", show_window_function) # Light up the messaging menu so that people know something has changed mm_source.set_property("draw-attention", "true") # Set the count of messages in this source. mm_source.set_property("count", "15") # If you prefer, you can set the time of the last message from this source, # rather than the count. (You can't set both.) This means that instead of a # message count, the MM will show "2m" or similar for the time since this # message arrived. 
# mm_source.set_property_time("time", time.time()) mm_source.show() function1() function2() gtk.mainloop()

    Read the article

  • Problem with TimeZoneInfo.ConvertTime: missed the Daylight Saving switch.

    - by SirMoreno
    My web app runs on .NET 3.5, and all dates are saved in the DB in UTC (not in the user's local time). When I want to display a date I convert it from UTC to the user's time zone:

        // Get the current datetime of the user, e.g. GMT to Israel (+2)
        public static DateTime GetUserDateTime(DateTime dateUTC)
        {
            string userTzId = "Israel Standard Time";
            TimeZoneInfo userTZ = TimeZoneInfo.FindSystemTimeZoneById(userTzId);
            dateUTC = DateTime.SpecifyKind(dateUTC, DateTimeKind.Utc);
            DateTime ret = TimeZoneInfo.ConvertTime(dateUTC, TimeZoneInfo.Utc, userTZ);
            return ret;
        }

    Until now this worked fine, but I have users from Israel (GMT+2), and Israel switched to daylight saving time on 26/3/10, so it is now GMT+3. For some reason TimeZoneInfo.ConvertTime does not know that the daylight saving switch was on 26/3/10, so it still converts to GMT+2. The strange thing is that on localhost it works fine. I set up a test page:

        DateTime userdate = GetUserDateTime(DateTime.UtcNow);
        string str2 = "UserDateTime = " + userdate.ToString("dd/MM/yy") + " " + userdate.ToString("HH:mm");

    On the server (Windows 2003, set to UTC time) it shows the wrong time (+2): UserDateTime = 27/03/10 21:38. On localhost (Windows XP, set to Israel time) it shows the correct time (+3): UserDateTime = 27/03/10 22:38. How can I update the TimeZoneInfo so that it knows the daylight saving switch in Israel was on 26/3/10? Thanks.
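
    TimeZoneInfo takes its daylight saving rules from the operating system, so a mismatch like this usually means the server is missing the Windows time zone update that describes Israel's 2010 transition while the workstation has it; applying the relevant update (and calling TimeZoneInfo.ClearCachedData afterwards) is the usual way to refresh it. For comparison, a small sketch using the IANA database through pytz, which already contains the 26/3/10 rule (pytz and the sample instant are illustrative, not part of the question):

        # Sketch (pytz assumed installed): the IANA zone database knows that
        # Israel switched to DST on 2010-03-26, so converting the same UTC
        # instant yields the +3 result the user expects.
        from datetime import datetime
        import pytz

        utc = pytz.utc
        israel = pytz.timezone('Asia/Jerusalem')
        instant = utc.localize(datetime(2010, 3, 27, 19, 38))   # 27/03/10 19:38 UTC
        print instant.astimezone(israel).strftime('%d/%m/%y %H:%M %Z%z')
        # -> 27/03/10 22:38 IDT+0300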

    Read the article

  • Cannot get the correct month for a call from the call log history

    - by Nishant Kumar
    I am trying to extract information from the call log on Android, but the call date I get is one month earlier than the actual time of the call. In other words, the date extracted by my code is one month behind the real call date. In the emulator I saved a contact and then made a call to that contact. I have tried 3 ways of extracting the call date, but all give the same wrong result. My code is as follows:

        /* Make the query to the call log content provider */
        Cursor callLogResult = context.getContentResolver().query(
                CallLog.Calls.CONTENT_URI, null, null, null, null);
        int columnIndex = callLogResult.getColumnIndex(Calls.DATE);
        Long timeInResult = callLogResult.getLong(columnIndex);

        /* Method 1: convert the milliseconds to a readable date */
        Time time = new Time();
        time.toMillis(true);
        time.set(timeInResult);
        String callDate = time.monthDay + "-" + time.month + "-" + time.year;

        /* Method 2: extract the date via Calendar */
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(timeInResult);
        int Month = calendar.get(Calendar.MONTH);

        /* Method 3: extract the date via java.util.Date */
        Date date = new Date(timeInResult);
        int mont = date.getMonth();

    While using the Calendar method I also tried to set the daylight saving offset, but it did not work:

        calendar.setTimeZone(TimeZone.getTimeZone("Europe/Paris"));
        int DST_OFFSET = calendar.get(Calendar.DST_OFFSET); // DST_OFFSET
        Boolean isSet = calendar.getTimeZone().useDaylightTime();
        if (isSet)
            calendar.set(Calendar.DST_OFFSET, 0);
        int reCheck = calendar.get(Calendar.DST_OFFSET);

    The value is not set to 0 in reCheck, and I am still getting the wrong month value with this approach as well. Please can someone tell me where I am wrong, or is this an error in the emulator? Thanks, Nishant Kumar, Engineering Student

    Read the article

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. Below is the setup used for the run: The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. OS used: CentOS 5.4. The average round-trip time is around 3 milliseconds, but sometimes it ranges from 100 milliseconds to a couple of seconds. Steps used for execution, measurement and output: Start the server: $ python sockServerLinger.py /dev/null & Start the client to post 1 million transactions to the server; it logs the time for each transaction in the client.log file: $ python sockClient.py 1000000 client.log Once the execution finishes, the following command shows the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>: $ grep -n "0.[1-9]" client.log | less Below is the example code for the server and client. Server # File: sockServerLinger.py import socket, traceback,time import struct host = '' port = 9999 l_onoff = 1 l_linger = 0 lingeropt = struct.pack('ii', l_onoff, l_linger) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt) s.bind((host, port)) s.listen(1) while 1: try: clientsock, clientaddr = s.accept() print "Got connection from", clientsock.getpeername() data = clientsock.recv(1024*1024*10) #print "asdasd",data numsent=clientsock.send(data) data1 = clientsock.recv(1024*1024*10) numsent=clientsock.send(data) ret = 1 while(ret>0): data1 = clientsock.recv(1024*1024*10) ret = len(data) clientsock.close() except KeyboardInterrupt: raise except: print traceback.print_exc() continue Client # File: sockClient.py import socket, traceback,sys import time i = 0 while 1: try: st = time.time() s = socket.socket(socket.AF_INET,socket.SOCK_STREAM) while (s.connect_ex(('127.0.0.1',9999)) != 0): continue numsent=s.send("asd"*1000) response = s.recv(6000) numsent=s.send("asd"*1000) response = s.recv(6000) i+=1 if i == int(sys.argv[1]): break except KeyboardInterrupt: raise except: print "in exec:::::::::::::",traceback.print_exc() continue print time.time() -st
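    One factor not covered above that commonly produces 40-200 ms outliers on small request/response exchanges is the interaction of Nagle's algorithm with delayed ACKs. Whether that is the cause here is a guess, but it is cheap to rule out by setting TCP_NODELAY on both ends; a minimal sketch of the client-side change (host, port and payload mirror the example code):

        # Disable Nagle's algorithm on the client socket before sending.
        # This is a diagnostic suggestion, not a confirmed fix for the spikes above.
        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        s.connect(('127.0.0.1', 9999))
        s.sendall(b"asd" * 1000)
        response = s.recv(6000)
        s.close()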

    Read the article

  • SQL Server replication - Log Reader Agent Read Latency Issue, Please help

    - by envykok
    Hi all, I am facing a transactional replication delay issue with the Log Reader Agent. The Log Reader output is: ********* STATISTICS SINCE AGENT STARTED ************** 02-28-2011 20:12:08 Execution time (ms): 304141 Work time (ms): 304016 Distribute Repl Cmds Time(ms): 303764 Fetch time(ms): 300813 Repldone time(ms): 1826 Write time(ms): 5319 Num Trans: 15500 Num Trans/Sec: 50.984159 Num Cmds: 191639 Num Cmds/Sec: 630.358271 It looks like Log Reader reader-thread latency. I also ran 'sp_replcounters' and see more than 20,000 seconds of replication latency, which keeps increasing. I used SQL Profiler to monitor sp_replcmds and found its execution time was 11 to 15 seconds. Is there any way to make the Log Reader read faster from the transaction log? Other information: SQL Server 2008 (SP2) Standard 64 bit
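    Not from the original post, but before tuning it can help to trend the latency over time to see whether it grows steadily or in bursts. A small monitoring sketch, assuming Python with pyodbc and a login allowed to run sp_replcounters on the publisher (connection string, server name and interval are placeholders):

        # Poll sp_replcounters periodically to trend replication latency.
        # Connection string, server name and interval are illustrative assumptions.
        import time
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=publisher;DATABASE=master;"
                              "Trusted_Connection=yes")
        cursor = conn.cursor()
        while True:
            cursor.execute("EXEC sp_replcounters")
            for row in cursor.fetchall():
                # Row includes database name, replicated transactions, replication
                # rate and replication latency (seconds) on SQL Server 2008.
                print(time.strftime("%H:%M:%S"), list(row))
            time.sleep(60)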

    Read the article

  • How does the Twitter rate limit API work with multiple accounts?

    - by dfrankow
    I know there's a REST API to check the Twitter rate limit. To summarize the policy: 150 per IP, and 150 per non-whitelisted account, except for searches (which are IP-only). However, my app uses Twython with authentication, and the limit seems to decrease for both of my accounts as I use it. Example: No authentication: $ wget http://api.twitter.com/1/account/rate_limit_status.xml -O - <?xml version="1.0" encoding="UTF-8"?> <hash> <hourly-limit type="integer">150</hourly-limit> <reset-time-in-seconds type="integer">1266968961</reset-time-in-seconds> <reset-time type="datetime">2010-02-23T23:49:21+00:00</reset-time> <remaining-hits type="integer">134</remaining-hits> </hash> Authentication with account #1: $ wget --user b... --password=youwish http://api.twitter.com/1/account/rate_limit_status.xml -O - <?xml version="1.0" encoding="UTF-8"?> <hash> <reset-time-in-seconds type="integer">1266968961</reset-time-in-seconds> <reset-time type="datetime">2010-02-23T23:49:21+00:00</reset-time> <remaining-hits type="integer">134</remaining-hits> <hourly-limit type="integer">150</hourly-limit> </hash> Authentication with account #2: $ wget --user d... --password=youwish http://api.twitter.com/1/account/rate_limit_status.xml -O - <?xml version="1.0" encoding="UTF-8"?> <hash> <reset-time type="datetime">2010-02-23T23:49:21+00:00</reset-time> <remaining-hits type="integer">134</remaining-hits> <hourly-limit type="integer">150</hourly-limit> <reset-time-in-seconds type="integer">1266968961</reset-time-in-seconds> </hash> You see how both accounts seem to have exactly the same rate limit info (134/150)? I only used one account in my app, so why do both accounts show a decrease?
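    One plausible explanation (not confirmed in the post): wget only sends Basic credentials after a 401 challenge unless --auth-no-challenge is given, and this endpoint answers 200 even without authentication, so all three requests may have been counted against the IP rather than the accounts. A sketch that sends the Authorization header on the first request, assuming Python with the requests library (credentials are placeholders, and this old v1 Basic-auth endpoint is used only because the post uses it):

        # Query the rate limit with credentials sent up front,
        # so the response reflects the account limit rather than the IP limit.
        import requests

        url = "http://api.twitter.com/1/account/rate_limit_status.xml"
        resp = requests.get(url, auth=("account1", "youwish"))  # placeholder credentials
        print(resp.status_code)
        print(resp.text)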

    Read the article

  • Params order in Foo.new(params[:foo]), need one before the other (Rails)

    - by Jeena
    I have a problem which I don't know how to fix. It has to do with the unsorted params hash. I have an object Reservation which has a virtual time= attribute and a virtual eating_session= attribute. When I set time= I also want to validate it via an external server request. I do that with the help of the method times(), which does a lookup on another server and saves all possible times in the @times variable. The problem is that the method times() needs the eating_session attribute to find out which times are valid, but Rails sometimes calls the time= method first, before there is any eating_session in the Reservation object, when I just do @reservation = Reservation.new(params[:reservation]) class ReservationsController < ApplicationController def new @reservation = Reservation.new(params[:reservation]) # ... end end class Reservation < ActiveRecord::Base include SoapClient attr_accessor :date, :time belongs_to :eating_session def time=(time) @time = times.find { |t| t[:time] == time } end def times return @times if defined? @times @times = [] response = call_soap :search_availability { # eating_session is sometimes nil :session_id => eating_session.code, # <- HERE IS THE PROBLEM :dining_date => date } response[:result].each do |result| @times << { :time => "#{DateTime.parse(result[:time]).strftime("%H:%M")}", :correlation_data => result[:correlation_data] } end @times end end I have no idea how to fix this; any help is appreciated.
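    Not from the post, but a common way around mass-assignment ordering is to have the setter store the raw value and defer the server lookup until both attributes are present (for example in a before_validation callback). A language-neutral sketch of that pattern in Python (class and method names are invented):

        # Store the raw value in the setter; resolve it only once both attributes exist.
        # This mirrors deferring the times() lookup to a before_validation callback.
        class Reservation:
            def __init__(self, eating_session=None, time=None):
                self.eating_session = eating_session
                self._raw_time = time
                self.time = None

            def resolve_time(self, lookup_times):
                # lookup_times(session) -> list of {"time": ..., "correlation_data": ...}
                if self.eating_session is None or self._raw_time is None:
                    return None
                candidates = lookup_times(self.eating_session)
                self.time = next((t for t in candidates
                                  if t["time"] == self._raw_time), None)
                return self.time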

    Read the article

  • How do I access variable values from one view controller in another?

    - by Thomas
    Hello all, I have an integer variable (time) in one view controller whose value I need in another view controller. Here's the code: MediaMeterViewController // TRP - On Touch Down event, start the timer -(IBAction) startTimer { time = 0; // TRP - Start a timer timer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(updateTimer) userInfo:nil repeats:YES]; [timer retain]; // TRP - Retain timer so it is not accidentally deallocated } // TRP - Method to update the timer display -(void)updateTimer { time++; // NSLog(@"Seconds: %i ", time); if (NUM_SECONDS == time) [timer invalidate]; } // TRP - On Touch Up Inside event, stop the timer, decide stress level, display results -(IBAction) btn_MediaMeterResults { [timer invalidate]; NSLog(@"Seconds: %i ", time); ResultsViewController *resultsView = [[ResultsViewController alloc] initWithNibName:@"ResultsViewController" bundle:nil]; [self.view addSubview:resultsView.view]; } And in ResultsViewController, I want to process time based on its value: ResultsViewController - (void)viewDidLoad { if(time < 3) {// Do something} else if ((time > 3) && (time < 6)) {// Do something else} //etc... [super viewDidLoad]; } I'm kind of unclear on when @property and @synthesize are necessary. Is that the case in this situation? Any help would be greatly appreciated. Thanks! Thomas
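    The usual pattern (not shown in the post) is to give ResultsViewController its own property, set it from MediaMeterViewController before adding the view, and read that property in viewDidLoad, rather than sharing the variable directly. A tiny Python sketch of the same idea (class and attribute names are illustrative, not the actual Cocoa API):

        # Pass the measured time into the results controller explicitly.
        class ResultsViewController:
            def __init__(self, elapsed_time):
                self.elapsed_time = elapsed_time  # set by the caller before display

            def view_did_load(self):
                if self.elapsed_time < 3:
                    print("low stress")
                elif self.elapsed_time < 6:
                    print("medium stress")
                else:
                    print("high stress")

        results = ResultsViewController(elapsed_time=5)
        results.view_did_load()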

    Read the article

  • Spring transaction : Transaction not active

    - by Videanu Adrian
    I am developing an app using Struts2, Spring 3.1, JPA 2 and Hibernate. From Spring I use transactions and IoC. I have an AJAX code block that calls a Struts2 action every second (this happens for every user that is logged into the application; simultaneous users are around 20-30 at a time). The action is PopupAction: public class PopupAction extends VActionBase implements ServletRequestAware { private static final long serialVersionUID = -293004532677112584L; private iIntermedService intermedService; private HttpServletRequest servletRequest; @Override public String execute() { Integer agentId = (Integer) session.get("USER_AGENT_ID"); Intermed iObj; try { iObj = intermedService.getIntermed(agentId,locationsString); } catch (Exception e) { logger.error("Cannot get Intermed!!! "+e.getMessage()); return ERROR; } return SUCCESS; } } and then I have the service class: @Transactional(readOnly=true) public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService { @Override public Intermed getIntermed (int agentId,String queueIds) throws Exception { Intermed intermedObj = null; //TODO - find a better implementation for this queueIds parameter!!!! try{ String sql = "SELECT i FROM bla bla bla.....)"; Query q = this.em.createQuery(sql); List<Intermed> iList = q.getResultList(); if (iList.size() == 1){ intermedObj = (Intermed) iList.get(0); //get latest object from DB em.refresh(intermedObj); } }catch(Exception e){ e.printStackTrace(); logger.error(e.getCause()+e.getMessage()); throw e; } return intermedObj; } } Here is the Spring configuration: <bean id="emfI" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="inboundDS" /> <property name="persistenceUnitName" value="I2PU"/> <!-- GlassFish load-time weaving setup --> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.glassfish.GlassFishLoadTimeWeaver"/> </property> </bean> <tx:annotation-driven transaction-manager="txManagerI" /> <tx:advice id="txManagerInboundAdvice" transaction-manager="txManagerI"> <tx:attributes> <tx:method name="*" rollback-for="java.lang.Exception"/> </tx:attributes> </tx:advice> I use named transaction managers because I have 3 data sources and 3 transaction managers. The problem is that my GlassFish logs are full of messages like these: -- removed in order to be able to add more recent logs -- So the cause is: Caused by: java.lang.IllegalStateException: Transaction not active. But I have no idea what causes this. Any help? Thanks. Update: I added to the @Transactional annotation the name of the transaction manager it has to use, but this still has not solved my problem.
I have captured a log from the time that the transaction is created until i got that exception: 2012-02-08T15:08:55.954+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractBeanFactory.java:245) - Returning cached instance of singleton bean 'txManagerVA' 2012-02-08T15:08:55.962+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:365) - Creating new transaction with name [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; '',-java.lang.Exception 2012-02-08T15:08:55.967+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:368) - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] for JPA transaction 2012-02-08T15:08:55.976+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:400) - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@725b979b] 2012-02-08T15:08:55.977+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.978+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.979+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:272) - Initializing transaction synchronization 2012-02-08T15:08:55.980+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:362) - Getting transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.983+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:423) - Starting resource local transaction on application-managed EntityManager [org.hibernate.ejb.EntityManagerImpl@46d002f4] 2012-02-08T15:08:55.984+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.986+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:400) - Joined local transaction 2012-02-08T15:08:55.991+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:391) - Completing transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.992+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:922) - Triggering beforeCommit synchronization 2012-02-08T15:08:55.994+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:935) - Triggering beforeCompletion 
synchronization 2012-02-08T15:08:56.001+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.002+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:752) - Initiating transaction commit 2012-02-08T15:08:56.003+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:507) - Committing JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] 2012-02-08T15:08:56.008+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:948) - Triggering afterCommit synchronization 2012-02-08T15:08:56.010+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:964) - Triggering afterCompletion synchronization 2012-02-08T15:08:56.011+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:331) - Clearing transaction synchronization 2012-02-08T15:08:56.012+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:593) - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] after transaction 2012-02-08T15:08:56.022+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (EntityManagerFactoryUtils.java:343) - Closing JPA EntityManager 2012-02-08T15:08:56.023+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|ERROR [thread-pool-1-80(80)] (PopupAction.java:39) - Cannot get Intermed!!! 
Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|org.springframework.dao.InvalidDataAccessApiUsageException: Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:298) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:106) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.convertException(ExtendedEntityManagerCreator.java:501) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:481) at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCommit(TransactionSynchronizationUtils.java:133) at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerAfterCommit(TransactionSynchronizationUtils.java:121) at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCommit(AbstractPlatformTransactionManager.java:950) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:796) at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723) at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy325.getIntermed(Unknown Source) at xxx.vs.common.actions.PopupAction.execute(PopupAction.java:37) at sun.reflect.GeneratedMethodAccessor1581.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at xxx.vs.common.utils.AuthenticationInterceptor.intercept(AuthenticationInterceptor.java:78) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at 
com.googlecode.sslplugin.interceptors.SSLInterceptor.intercept(SSLInterceptor.java:128) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54) at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:510) at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91) at org.apache.catalina 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|.core.StandardHostValve.invoke(StandardHostValve.java:162) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:679) Caused by: java.lang.IllegalStateException: Transaction not active at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:69) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:478) ... 93 more so again..... any ideea ?

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch on Wifi I get huge latency (and an enormous variance, too): 64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms 64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms 64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms 64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms 64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms 64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms 64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms 64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms 64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms Even if this is double what the iPhone-Host or Host-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times: traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets 1 192.168.1.3 (192.168.1.3) 4.662 ms 3.182 ms 3.034 ms Can anyone decipher this difference between ping and traceroute for me, and what they might mean for an application that needs to talk to (and from) a host?
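    Not from the post, but ICMP round trips to an iPhone over Wi-Fi are strongly affected by the device's Wi-Fi power-save behaviour, so an application-level measurement over TCP is usually more representative of what the MIDI traffic would see. A small Python 3 sketch (host, port and payload are hypothetical placeholders for an echo service running on the device):

        # Measure application-level round-trip time over TCP instead of ICMP ping.
        import socket
        import time

        HOST, PORT = "192.168.1.3", 9000  # hypothetical echo service on the device
        samples = []
        for _ in range(20):
            with socket.create_connection((HOST, PORT), timeout=2) as s:
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                start = time.time()
                s.sendall(b"ping")
                s.recv(64)
                samples.append((time.time() - start) * 1000.0)
        print("min/avg/max ms: %.1f / %.1f / %.1f"
              % (min(samples), sum(samples) / len(samples), max(samples)))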

    Read the article

  • How to update QStandardItemModel without freezing the main UI

    - by user1044002
    I'm starting to learn PyQt4 and have been stuck on something for a long time now and can't figure it out myself. Here is the concept: there is a TreeView with a custom QStandardItemModel, which gets rebuilt every couple of seconds and can have a lot of entries (hundreds at least); there will also be additional delegates for the different columns, etc. It's fairly complex, and the build time for even a plain model, without delegates, goes up to .3 sec, which makes the TreeView freeze. Please advise me on the best approach to solving this. I was thinking of somehow building the model in a different thread and eventually sending it to the TreeView, which would just perform setModel() with the new one, but I couldn't make that work. Here is some code that may illustrate the problem a bit: from PyQt4.QtCore import * from PyQt4.QtGui import * import sys, os, re, time app = QApplication(sys.argv) REFRESH = 1 class Reloader_Thread(QThread): def __init__(self, parent = None): QThread.__init__(self, parent) self.loaders = ['\\', '--', '|', '/', '--'] self.emit(SIGNAL('refresh')) def run(self): format = '|%d/%b/%Y %H:%M:%S| ' while True: self.emit(SIGNAL('refresh')) self.sleep(REFRESH) class Model(QStandardItemModel): def __init__(self, viewer=None): QStandardItemModel.__init__(self,None) self.build() def build(self): stTime = time.clock() newRows = [] for r in range(1000): row = [] for c in range(12): item = QStandardItem('%s %02d%02d' % (time.strftime('%H"%M\'%S'), r,c)) row.append(item) newRows.append(row) eTime = time.clock() - stTime outStr = 'Build %03f' % eTime format = '|%d/%b/%Y %H:%M:%S| ' stTime = time.clock() self.beginRemoveRows(QModelIndex(), 0, self.rowCount()) self.removeRows(0, self.rowCount()) self.endRemoveRows() eTime = time.clock() - stTime outStr += ', Remove %03f' % eTime stTime = time.clock() numNew = len(newRows) for r in range(numNew): self.appendRow(newRows[r]) eTime = time.clock() - stTime outStr += ', Set %03f' % eTime self.emit(SIGNAL('status'), outStr) self.reset() w = QWidget() w.setGeometry(200,200,800,600) hb = QVBoxLayout(w) tv = QTreeView() tvm = Model(tv) tv.setModel(tvm) sb = QStatusBar() reloader = Reloader_Thread() tvm.connect(tvm, SIGNAL('status'), sb.showMessage) reloader.connect(reloader, SIGNAL('refresh'), tvm.build) reloader.start() hb.addWidget(tv) hb.addWidget(sb) w.show() app.setStyle('plastique') app.processEvents(QEventLoop.AllEvents) app.aboutToQuit.connect(reloader.quit) app.exec_()
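    The usual cure is to build only the row data in the worker thread and touch the model exclusively from the GUI thread, handing the data over through a signal (a queued connection delivers it on the GUI thread). Below is a minimal sketch of that shape, using PyQt4 new-style signals; the class and function names are invented and details such as delegates are omitted:

        # Build row data off the GUI thread, apply it to the model on the GUI thread.
        # This is a sketch of the pattern, not a drop-in replacement for the code above.
        import time
        from PyQt4.QtCore import QThread, pyqtSignal
        from PyQt4.QtGui import QStandardItem

        class RowBuilder(QThread):
            rowsReady = pyqtSignal(list)   # emits plain Python data, not QStandardItems

            def run(self):
                while True:
                    rows = [['%s %02d%02d' % (time.strftime('%H:%M:%S'), r, c)
                             for c in range(12)] for r in range(1000)]
                    self.rowsReady.emit(rows)
                    self.sleep(1)

        def apply_rows(model, rows):
            # Runs on the GUI thread; only here are items created and the model mutated.
            model.setRowCount(0)
            for row in rows:
                model.appendRow([QStandardItem(text) for text in row])

        # Wiring inside the GUI setup (illustrative):
        # builder = RowBuilder()
        # builder.rowsReady.connect(lambda rows: apply_rows(tvm, rows))
        # builder.start()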

    Read the article

  • hudson.util.ProcessTreeTest test error

    - by senzacionale
    error: Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec Running hudson.util.ProcessTreeTest Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE! Running hudson.model.LoadStatisticsTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.089 sec Running hudson.util.ArgumentListBuilderTest Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec Running hudson.util.RobustReflectionConverterTest Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec Running hudson.util.VersionNumberTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec Running hudson.util.CyclicGraphDetectorTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec Results : Tests in error: testRemoting(hudson.util.ProcessTreeTest) Tests run: 102, Failures: 0, Errors: 1, Skipped: 0 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. Please refer to D:\PROJEKTI\Maven\hudson\main\core\target\surefire-reports for the individual test results. [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 minutes 58 seconds [INFO] Finished at: Fri Jun 11 21:04:46 CEST 2010 [INFO] Final Memory: 85M/152M [INFO] ------------------------------------------------------------------------ error log: ------------------------------------------------------------------------------- Test set: hudson.util.ProcessTreeTest ------------------------------------------------------------------------------- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE! testRemoting(hudson.util.ProcessTreeTest) Time elapsed: 0.169 sec <<< ERROR! 
org.jvnet.winp.WinpException: Failed to read environment variable table error=299 at .\envvar-cmdline.cpp:114 at org.jvnet.winp.Native.getCmdLineAndEnvVars(Native Method) at org.jvnet.winp.WinProcess.parseCmdLineAndEnvVars(WinProcess.java:114) at org.jvnet.winp.WinProcess.getEnvironmentVariables(WinProcess.java:109) at hudson.util.ProcessTree$Windows$1.getEnvironmentVariables(ProcessTree.java:419) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:274) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:255) at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:215) at hudson.remoting.UserRequest.perform(UserRequest.java:114) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:270) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) does anyone have any idea what can be wrong in test? Regards

    Read the article

  • Python script, runs well, but not perfectly, debugging help.

    - by S1syphus
    What it does (sort of), or is meant to: the script reads from a CSV file that contains information on sound files and creates a playlist exactly 60 minutes long. Each CSV row contains: title, duration (in seconds), minimum total time to be played (in minutes). An example is: Soundfoo,120,10 Soundbar,30,6 Sounddev,60,20 Soundrandom,15,8 The script works out the minimum number of plays. Take 'Soundfoo' for example: the length of each sample is 120 seconds and the minimum time to be played is 10 minutes, so basic maths, 10*60/120, gives the number of times the song is to be played, in this case 5. It is meant to take the minimum number of instances and spread them out equally, so there will never be a period where, for example, Soundbar is played twice in a row. Then, if the minimum instances of each song have been used and there is still time within the 60 minutes, how is it possible to tell it to go back and fill the time by selecting each sound and including it until the 60 minutes are filled, while staying spread out? Here's the issue(s): the script fails to calculate the actual time required to play all the sounds in a file and the total time of the playlist. The thing is, though, it doesn't get it wrong all the time, maybe 3/5 times; even if I run it on the same CSV file it will give me different answers. Here is the file I shall run the script on, for the sake of ease, to see the issue: Sound1,60,10 Sound2,60,10 Sound3,60,10 Sound4,60,10 Sound5,60,10 Sound6,60,10 I'll run it three times and post the results: 1 Required playtime in minutes: 60 Actual time in minutes to play all required ads: 62 Total playtime in minutes: 62.0 2 Required playtime in minutes: 60 Actual time in minutes to play all required ads: 71 Total playtime in minutes: 71.0 3 Required playtime in minutes: 60 Actual time in minutes to play all required ads: 60 Total playtime in minutes: 60.0 Relevant Code: pastebin.com/demkBXk6 And finally... in context: http://pastebin.com/demkBXk6 If you made it down to here, thanks for staying and reading, kudos.
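    The actual script is only on pastebin, so this is guesswork, but the symptom (different totals for identical input) usually points at a random fill phase that does not re-check the remaining time. The deterministic part of the arithmetic can be verified in a few lines of Python 3; this sketch assumes the CSV columns are name, duration in seconds, minimum minutes, as in the examples above, and a hypothetical file name:

        # Deterministic part of the schedule: minimum plays per sound and the time
        # they consume, to check the 60-minute budget before any random fill.
        import csv
        import math

        TARGET_SECONDS = 60 * 60
        required_seconds = 0
        with open("sounds.csv", newline="") as f:   # hypothetical file name
            for name, dur_s, min_minutes in csv.reader(f):
                dur_s, min_minutes = int(dur_s), int(min_minutes)
                plays = math.ceil(min_minutes * 60 / dur_s)   # minimum instances
                required_seconds += plays * dur_s
                print("%s: %d plays, %d s" % (name, plays, plays * dur_s))

        print("Required playtime in minutes:", required_seconds / 60.0)
        print("Slack left to fill in minutes:", (TARGET_SECONDS - required_seconds) / 60.0)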

    Read the article
