Search Results

Search found 65206 results on 2609 pages for 'real time'.

  • What is realism?

    - by eversor
    Beyond the obvious "something that seems real", realism in games is a hard feature to hit. In some cases, things that are completely impossible in real life are seen as realistic by gamers. For instance, in some FPS games you can survive being hit by a fair number of bullets when in real life one is enough, there are Newton-defying car drifts, etc. So, in some cases, toning down life-like actions or consequences creates a greater sense of realism. The root of this pseudo-philosophical question lies here: I am going to create an engine for battles in an online (browser-based) strategy game. Browser-based means that the battle would not be seen. And I do not know how to approach this realism issue.

    Read the article

  • Help to solve "Robbery Problem"

    - by peiska
    Hello, Can anybody help me with this problem in C or Java? The problem is taken from here: http://acm.pku.edu.cn/JudgeOnline/problem?id=1104 Inspector Robstop is very angry. Last night, a bank has been robbed and the robber has not been caught. And this happened already for the third time this year, even though he did everything in his power to stop the robber: as quickly as possible, all roads leading out of the city were blocked, making it impossible for the robber to escape. Then, the inspector asked all the people in the city to watch out for the robber, but the only messages he got were of the form "We don't see him." But this time, he has had enough! Inspector Robstop decides to analyze how the robber could have escaped. To do that, he asks you to write a program which takes all the information the inspector could get about the robber in order to find out where the robber has been at which time. Coincidentally, the city in which the bank was robbed has a rectangular shape. The roads leaving the city are blocked for a certain period of time t, and during that time, several observations of the form "The robber isn't in the rectangle Ri at time ti" are reported. Assuming that the robber can move at most one unit per time step, your program must try to find the exact position of the robber at each time step. Input The input contains the description of several robberies. The first line of each description consists of three numbers W, H, t (1 <= W,H,t <= 100) where W is the width, H the height of the city and t is the time during which the city is locked. The next contains a single integer n (0 <= n <= 100), the number of messages the inspector received. The next n lines (one for each of the messages) consist of five integers ti, Li, Ti, Ri, Bi each. The integer ti is the time at which the observation has been made (1 <= ti <= t), and Li, Ti, Ri, Bi are the left, top, right and bottom respectively of the (rectangular) area which has been observed. (1 <= Li <= Ri <= W, 1 <= Ti <= Bi <= H; the point (1, 1) is the upper left hand corner, and (W, H) is the lower right hand corner of the city.) The messages mean that the robber was not in the given rectangle at time ti. The input is terminated by a test case starting with W = H = t = 0. This case should not be processed. Output For each robbery, first output the line "Robbery #k:", where k is the number of the robbery. Then, there are three possibilities: If it is impossible that the robber is still in the city considering the messages, output the line "The robber has escaped." In all other cases, assume that the robber really is in the city. Output one line of the form "Time step : The robber has been at x,y." for each time step, in which the exact location can be deduced. (x and y are the column resp. row of the robber in time step .) Output these lines ordered by time . If nothing can be deduced, output the line "Nothing known." and hope that the inspector will not get even more angry. Output a blank line after each processed case.
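
    A minimal sketch of the standard way to attack this problem (my own outline, not the poster's code, and written in Python rather than the C or Java the judge expects): keep a boolean grid of cells the robber could occupy at each time step, propagate it forward and backward in time (one-unit moves or standing still, minus whatever each observation rules out), and intersect the two passes. A time step whose intersection is empty means the reports are contradictory; a time step with exactly one surviving cell gives an exact position.

    ```python
    # Sketch only: forward/backward propagation of the robber's possible positions.
    def solve(W, H, t, observations):
        # observations: list of (ti, L, T, R, B), meaning "not inside that rectangle at time ti"
        def allowed(step):
            """Cells not excluded by any observation made at this time step."""
            grid = [[True] * W for _ in range(H)]
            for (ti, L, T, R, B) in observations:
                if ti == step:
                    for y in range(T - 1, B):
                        for x in range(L - 1, R):
                            grid[y][x] = False
            return grid

        def spread(grid):
            """One time step later: the robber may stay or move one unit up/down/left/right."""
            out = [[False] * W for _ in range(H)]
            for y in range(H):
                for x in range(W):
                    if grid[y][x]:
                        for dy, dx in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < H and 0 <= nx < W:
                                out[ny][nx] = True
            return out

        def intersect(a, b):
            return [[a[y][x] and b[y][x] for x in range(W)] for y in range(H)]

        forward, cur = [], [[True] * W for _ in range(H)]
        for step in range(1, t + 1):
            cur = intersect(spread(cur) if step > 1 else cur, allowed(step))
            forward.append(cur)

        backward, cur = [None] * t, [[True] * W for _ in range(H)]
        for step in range(t, 0, -1):
            cur = intersect(spread(cur) if step < t else cur, allowed(step))
            backward[step - 1] = cur

        # For each time step, the cells consistent with both past and future observations.
        per_step = []
        for step in range(t):
            cells = [(x + 1, y + 1)
                     for y in range(H) for x in range(W)
                     if forward[step][y][x] and backward[step][y][x]]
            per_step.append(cells)
        return per_step  # an empty list at any step means the robber cannot still be in the city
    ```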

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. E.g. if I do dig apache.org 3 times, none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even hit nscd. nsswitch.conf: # Begin /etc/nsswitch.conf passwd: files group: files shadow: files publickey: files hosts: cache files dns networks: files protocols: files services: files ethers: files rpc: files netgroup: files # End /etc/nsswitch.conf nscd.conf: # # /etc/nscd.conf # # An example Name Service Cache config file. This file is needed by nscd. # # Legal entries are: # # logfile <file> # debug-level <level> # threads <initial #threads to use> # max-threads <maximum #threads to use> # server-user <user to run server as instead of root> # server-user is ignored if nscd is started with -S parameters # stat-user <user who is allowed to request statistics> # reload-count unlimited|<number> # paranoia <yes|no> # restart-interval <time in seconds> # # enable-cache <service> <yes|no> # positive-time-to-live <service> <time in seconds> # negative-time-to-live <service> <time in seconds> # suggested-size <service> <prime number> # check-files <service> <yes|no> # persistent <service> <yes|no> # shared <service> <yes|no> # max-db-size <service> <number bytes> # auto-propagate <service> <yes|no> # # Currently supported cache names (services): passwd, group, hosts, services # logfile /var/log/nscd.log threads 4 max-threads 32 server-user nobody # stat-user somebody debug-level 9 # reload-count 5 paranoia no # restart-interval 3600 enable-cache passwd yes positive-time-to-live passwd 600 negative-time-to-live passwd 20 suggested-size passwd 211 check-files passwd yes persistent passwd yes shared passwd yes max-db-size passwd 33554432 auto-propagate passwd yes enable-cache group yes positive-time-to-live group 3600 negative-time-to-live group 60 suggested-size group 211 check-files group yes persistent group yes shared group yes max-db-size group 33554432 auto-propagate group yes enable-cache hosts yes positive-time-to-live hosts 3600 negative-time-to-live hosts 20 suggested-size hosts 211 check-files hosts yes persistent hosts yes shared hosts yes max-db-size hosts 33554432 enable-cache services yes positive-time-to-live services 28800 negative-time-to-live services 20 suggested-size services 211 check-files services yes persistent services yes shared services yes max-db-size services 33554432 resolv.conf: # Generated by dhcpcd from eth0 nameserver 127.0.0.1 domain westell.com nameserver 192.168.1.1 nameserver 208.67.222.222 nameserver 208.67.220.220 As a side note, I'm using Arch Linux.
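
    One thing worth checking before blaming nscd (my own note, not from the post): dig does not go through glibc's name-service switch at all — it speaks DNS directly to the servers listed in resolv.conf — so dig will never register as a hit in nscd -g. A lookup made through getaddrinfo(), like the sketch below, is what nscd actually sees and can cache (the hostname and the 3-repeat loop just mirror the test in the question):

    ```python
    # Exercise the libc resolver path (getaddrinfo -> nsswitch.conf -> nscd) instead of dig.
    import socket
    import time

    def timed_lookup(name="apache.org"):
        start = time.perf_counter()
        addrs = socket.getaddrinfo(name, None)      # goes through glibc, so nscd can cache it
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {len(addrs)} results in {elapsed_ms:.1f} ms")

    for _ in range(3):
        timed_lookup()   # after the first call, 'nscd -g' should show hosts-cache hits if caching works
    ```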

    Read the article

  • SNTP, why do you mock me?!

    - by Matthew
    --- SOLVED SEE EDIT 5 --- My w2k3 pdc is configured as an authoritative time server. Other servers on the domain are able to sync with it if I manually specify it in the peer list. By if I try to sync from flags 'domhier', it wont resync; I get the error message The computer did not resync because no time data was available. I can only think that it is not querying the pdc. I also tried setting the registry as shown here (http://support.microsoft.com/kb/193825). But no luck (I have not restarted the server, I am hoping I wont have to since it is the pdc) If you would like any further information on my config, please let me know. Edit 1: I have set the w32time service config AnnouceFlags to 0x05 as documented here www.krr.org/microsoft/authoritative_time_servers.php and a number of other places. The PDC syncs to an external time source (ntp). I can get the stripchart on the client from the pdc no problems. The loginserver for the host I am trying to configure is shown as the pdc. Edit 2: The packet capture has revealed something interesting. The client is contacting the correct server, and getting a valid response but I still get the same error message. Here is the NTP excerpt from the client to the server Flags: 11.. .... = Leap Indicator: alarm condition (clock not synchronized) (3) ..01 1... = Version number: NTP Version 3 (3) .... .011 = Mode: client (3) Peer Clock Stratum: unspecified or unavailable (0) Peer Polling Interval: 10 (1024 sec) Peer Clock Precision: 0.015625 sec Root Delay: 0.0000 sec Root Dispersion: 1.0156 sec Reference Clock ID: NULL Reference Clock Update Time: Sep 1, 2010 05:29:39.8170 UTC Originate Time Stamp: NULL Receive Time Stamp: NULL Transmit Time Stamp: Nov 8, 2010 01:44:44.1450 UTC Key ID: DC080000 Here is the reply NTP excerpt from the server to the client Flags: 0x1c 00.. .... = Leap Indicator: no warning (0) ..01 1... = Version number: NTP Version 3 (3) .... .100 = Mode: server (4) Peer Clock Stratum: secondary reference (3) Peer Polling Interval: 10 (1024 sec) Peer Clock Precision: 0.00001 sec Root Delay: 0.1484 sec Root Dispersion: 0.1060 sec Reference Clock ID: 192.189.54.17 Reference Clock Update Time: Nov 8,2010 01:18:04.6223 UTC Originate Time Stamp: Nov 8, 2010 01:44:44.1450 UTC Receive Time Stamp: Nov 8, 2010 01:46:44.1975 UTC Transmit Time Stamp: Nov 8, 2010 01:46:44.1975 UTC Key ID: 00000000 Edit 3: dumpreg for paramters on pdc Value Name Value Type Value Data ------------------------------------------------------------------------ ServiceMain REG_SZ SvchostEntry_W32Time ServiceDll REG_EXPAND_SZ C:\WINDOWS\system32\w32time.dll NtpServer REG_SZ bhvmmgt01.domain.com,0x1 Type REG_SZ AllSync and config Value Name Value Type Value Data -------------------------------------------------------------------------- LastClockRate REG_DWORD 156249 MinClockRate REG_DWORD 155860 MaxClockRate REG_DWORD 156640 FrequencyCorrectRate REG_DWORD 4 PollAdjustFactor REG_DWORD 5 LargePhaseOffset REG_DWORD 50000000 SpikeWatchPeriod REG_DWORD 900 HoldPeriod REG_DWORD 5 LocalClockDispersion REG_DWORD 10 EventLogFlags REG_DWORD 2 PhaseCorrectRate REG_DWORD 7 MinPollInterval REG_DWORD 6 MaxPollInterval REG_DWORD 10 UpdateInterval REG_DWORD 100 MaxNegPhaseCorrection REG_DWORD -1 MaxPosPhaseCorrection REG_DWORD -1 AnnounceFlags REG_DWORD 5 MaxAllowedPhaseOffset REG_DWORD 300 FileLogSize REG_DWORD 10000000 FileLogName REG_SZ C:\Windows\Temp\w32time.log FileLogEntries REG_SZ 0-300 Edit 4: Here are some notables from the ntp log file on the pdc. ReadConfig: failed. 
Use default one 'TimeJumpAuditOffset'=0x00007080 DomainHierachy: we are now the domain root. ClockDispln: we're a reliable time service with no time source: LS: 0, TN: 864000000000, WAIT: 86400000 Edit 5: F&^%ING SOLVED! Ok so I was reading about people with similar problems, some mentioned w32time server settings applied by GPO, but I tested this early on and there were no settings applied to this service by gpo. Others said that the reporting software may not be picking up some old gpo settings applied. So I searched the registry for all w32time instaces. I came across an interesting key that indicated there may be some other ntp software running on the server. Sure enough, I look through the installed software list and there the little F*&%ER is. Uninstalled and now working like a dream. FFFFFFFUUUUUUUUUUUU
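
    For cross-checking what a server hands out without involving w32time at all, a tiny SNTP query can be sent by hand. The sketch below is my own (the server name is just an example); it only reads the Transmit Timestamp field, the same field shown in the packet capture above, and is not a full NTP client.

    ```python
    # Minimal SNTP query: send a 48-byte version-3 client packet and read the reply's transmit time.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def sntp_time(server="pool.ntp.org", port=123, timeout=5):
        packet = bytearray(48)
        packet[0] = 0x1B                      # LI=0, Version=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])   # Transmit Timestamp field
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    print("server time :", time.ctime(sntp_time()))
    print("local  time :", time.ctime())
    ```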

    Read the article

  • How to save an NTFS partition which suddenly became empty

    - by SteveO
    One NTFS partition of my laptop was suddenly wiped out, without any notice to me, when I rebooted from Windows 7 to Ubuntu 12.04 today. I am in need of help to save my files on that partition, which are important and unfortunately haven't been backed up yet. My laptop has two operating systems, Windows 7 and Ubuntu 12.04, with an NTFS partition shared between the two operating systems for storing some data files (109GB, about 97% of which has been used). I have almost always been using Ubuntu, but today I happened to have to work under Windows. Following is a record of what happened, in time order, noting which operating system I was in at each stage. When I started into Windows 7, right before being able to log in, it took a while and two reboots to configure Windows. I thought it was normal, since the last time I was using Windows, two weeks ago, it took very long and several reboots to update Windows, because the last time I had used Windows before then was in November last year. Then, after finally being able to log in to Windows 7, I installed LibreOffice, MathType (I got it from http://dl.portablesoft.org/down/?id=2515, which I originally thought was a trial version, but later I learned was a cracked version and felt wrong. I made a copy of it at Dropbox, http://dl.dropbox.com/u/13029929/MathType_6.8_PortableSoft.rar, not for distributing it but to list it there just in case it will help to identify the problem), and MiKTeX. I then edited some .doc files on the NTFS partition under both Microsoft Office with MathType, and LibreOffice. When I finished working under Windows and rebooted into Ubuntu, Ubuntu did some filesystem checking and reported that the NTFS partition was not able to be mounted. Then I rebooted again into Windows, and found that the NTFS partition had been emptied, i.e. all the data files were gone, and only one system file, bootsqm.dat, and one system directory, System Volume Information, were there, with their last updated time being the time when I first rebooted from Windows to Ubuntu (in fact, it is 4 hours ahead of the actual time of that reboot; see immediately below). Also, I noticed that the time shown by Windows is not correct for my time zone ((UTC-05:00) Eastern Time (US & Canada)): it is 4 hours ahead of the correct time (my current time is 3am, but the computer shows 7am). The same things happened when I rebooted into Ubuntu again: the NTFS partition had been emptied and left with only one Windows system file, bootsqm.dat, and one Windows system directory, System Volume Information, and the time shown by Ubuntu is 4 hours ahead of the correct time. I wonder what I can do to retrieve my data files back from the NTFS partition? If I am not able to do it myself, will some professionals be able to help me out? Thanks a lot! PS: I don't think I did anything that required emptying that partition, but there was quite a bit of work I did during that stage right before the reboot from Windows to Ubuntu when the problem occurred. Did I make a mistake in some operation?

    Read the article

  • svchost.exe crash on wake up

    - by Serge
    Lately whenever I wake up my laptop from sleep I get a series of errors (generated by a host process failing) I haven't been able to figure out why this happens but I know which host process fails and was wondering if someone had some insight on why this keeps occuring 99% of the time when my laptop wakes up. here's the host process error Faulting application svchost.exe_SysMain, version 6.0.6001.18000, time stamp 0x47919291, faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e0421d, exception code 0xc0000006, fault offset 0x000000000005a02d, process id 0x1738, application start time 0x01cae656279b1010. and here are some services that fail because of that host The Windows Audio Endpoint Builder service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service. The Wired AutoConfig service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 0 milliseconds: Restart the service. The ReadyBoost service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service. The Human Interface Device Access service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 120000 milliseconds: Restart the service. The Network Connections service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 100 milliseconds: Restart the service. The Program Compatibility Assistant Service service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service. The Superfetch service terminated unexpectedly. It has done this 2 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service. Anyways I think you get the point, there are a few more. It got really annoying to wait for those services to restart so I created a batch file that does it automatically whenever the wlan stops I'm using Vista x64 on a Studio XPS 1640
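
    For what it's worth, the "restart everything after the crash" batch file the poster mentions can be reduced to a short script along these lines. This is a sketch, not the poster's file, and the service short names are my assumptions for the display names listed in the event log entries; run it elevated.

    ```python
    # Restart the services that die with the crashing svchost instance.
    import subprocess

    SERVICES = [
        "AudioEndpointBuilder",  # Windows Audio Endpoint Builder
        "dot3svc",               # Wired AutoConfig
        "EMDMgmt",               # ReadyBoost
        "hidserv",               # Human Interface Device Access
        "Netman",                # Network Connections
        "PcaSvc",                # Program Compatibility Assistant
        "SysMain",               # Superfetch
    ]

    for svc in SERVICES:
        # 'sc start' fails harmlessly if the service is already running, so ignore the return code
        subprocess.run(["sc", "start", svc], check=False)
    ```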

    Read the article

  • Looking for a NTP Server Software for Windows

    - by Simon
    I'm looking for a (preferably free) NTP server for Windows Server 2003/2008. We have already tried the built-in Windows time service, but our tests showed that it is not very accurate; we see time differences of up to 500 ms. The maximum time difference we can allow for our application is ~100 ms. We have already used the Meinberg ntpd for Windows. It works great, except that we have one big issue with it: if there is a network connection problem between the client and server, the NTP server ends up in a panic state and won't give the client a new time until we restart the NTP service. This is a big issue which has caused us some trouble. It was working fine for months until there was a network problem we didn't notice; we only noticed it after a week, when the time difference was already 30 sec. on the clients. So please suggest some alternative NTP server for Windows. I did Google for it, but I get a lot of unrelated search results. Edit: So far the ntpd Windows version was very accurate and I'd like to stick with it. The only problem is the "panic state" after a network disconnect. Maybe someone here knows what the cause of this is and how to fix it. Also, I forgot to mention that we have a server/client setup like this: Server1 -- Server2 -- Server3 -- Client1 -- Client2 -- Client3. So Server2 gets its time from Server1, Server3 gets its time from Server2, and the clients get their time from Server3. Also, there are clients connected directly to Server2. It is important that all servers and clients have the exact same time (within ~100 ms). Now there was a network problem with Server3 and its clients. The servers run the ntpd port for Windows, which acts as NTP server and client. The clients have Dimension4 as NTP client. After the network problem, the error message in D4 was something like this (off the top of my head, I don't have the exact error message): "Server response: The server is in a panic state (could not sync clock)". I read through the ntpd docs, and the only mention of "panic" is when the time difference is 10000 seconds, which causes the ntpd server to exit, but this was not the case. Also, there is a "-g" command line switch to disable the panic exit, but it is already set by default. Any ideas what could cause the panic state and how to get rid of it next time?

    Read the article

  • I am unable to connect to my netbook from any machine on my network until the netbook has pinged it

    - by Samuel Husky
    I have a rather strange issue with my netbook on my local network. When trying to connect to it in any way from a remote system it does not appear to find it. However if I get the netbook to ping the machine trying to connect it mystically appears to work. Below is the ping test from my main PC to the netbook. C:\Users\Sam>ping 192.168.8.102 Pinging 192.168.8.102 with 32 bytes of data: Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Reply from 192.168.8.100: Destination host unreachable. Ping statistics for 192.168.8.102: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Now a ping from the netbook to my main PC sam@malamute ~ $ ping 192.168.8.100 PING 192.168.8.100 (192.168.8.100) 56(84) bytes of data. 64 bytes from 192.168.8.100: icmp_req=1 ttl=128 time=2.46 ms 64 bytes from 192.168.8.100: icmp_req=2 ttl=128 time=0.835 ms 64 bytes from 192.168.8.100: icmp_req=3 ttl=128 time=1.60 ms 64 bytes from 192.168.8.100: icmp_req=4 ttl=128 time=1.32 ms 64 bytes from 192.168.8.100: icmp_req=5 ttl=128 time=1.34 ms ^C --- 192.168.8.100 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 0.835/1.514/2.460/0.536 ms And the same ping again from the main PC after the netbook has made a connection to it C:\Users\Sam>ping 192.168.8.102 Pinging 192.168.8.102 with 32 bytes of data: Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Reply from 192.168.8.102: bytes=32 time=1ms TTL=64 Ping statistics for 192.168.8.102: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 1ms, Maximum = 1ms, Average = 1ms The netbook is running Gentoo and is currently connected via wireless. My main PC is running Windows 7 however I get the same result no matter what PC I use on this network. Please see this example from a CentOS machine on the same network [root@tiger ~]# ping 192.168.8.102 PING 192.168.8.102 (192.168.8.102) 56(84) bytes of data. From 192.168.8.200 icmp_seq=2 Destination Host Unreachable From 192.168.8.200 icmp_seq=3 Destination Host Unreachable From 192.168.8.200 icmp_seq=4 Destination Host Unreachable --- 192.168.8.102 ping statistics --- 6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5000ms , pipe 3 If you need any more information or require logs or config files please let me know and any assistance is greatly appreciated. Additional info: No responses on TCP dump from the netbook. Same result when booting into Ubuntu from a USB key. No issue when using a wired Ethernet connection.
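
    The "unreachable until the netbook talks first" pattern is what you would see if the other hosts cannot resolve the netbook's MAC address until the netbook itself sends a frame (i.e. ARP is not being answered), for example because of access-point client isolation or a firewall dropping ARP/broadcast traffic. A quick check I would run on the CentOS box — my own sketch, using the IPs from the question and Linux tooling:

    ```python
    # Ping the netbook once, then inspect the neighbour (ARP) table entry for it.
    import subprocess

    TARGET = "192.168.8.102"   # the netbook's address from the question

    subprocess.run(["ping", "-c", "1", "-W", "2", TARGET])   # Linux iputils flags
    subprocess.run(["ip", "neigh", "show", TARGET])          # FAILED/INCOMPLETE here points at ARP
    # (The Windows equivalent of the second command would be: arp -a)
    ```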

    Read the article

  • Why is this giving me 2 different sets of timezones?

    - by chobo2
    Hi I have this line to get all the timezones Dictionary<string, TimeZoneInfo> storeZoneName = TimeZoneInfo.GetSystemTimeZones().ToDictionary(z => z.DisplayName); Now when I upload I try it on my local machine I get this (UTC-12:00) International Date Line West (UTC-11:00) Coordinated Universal Time-11 (UTC-11:00) Samoa (UTC-10:00) Hawaii (UTC-09:00) Alaska (UTC-08:00) Baja California (UTC-08:00) Pacific Time (US & Canada) (UTC-07:00) Arizona (UTC-07:00) Chihuahua, La Paz, Mazatlan (UTC-07:00) Mountain Time (US & Canada) (UTC-06:00) Central America (UTC-06:00) Central Time (US & Canada) (UTC-06:00) Guadalajara, Mexico City, Monterrey (UTC-06:00) Saskatchewan (UTC-05:00) Bogota, Lima, Quito (UTC-05:00) Eastern Time (US & Canada) (UTC-05:00) Indiana (East) (UTC-04:30) Caracas (UTC-04:00) Asuncion (UTC-04:00) Atlantic Time (Canada) (UTC-04:00) Cuiaba (UTC-04:00) Georgetown, La Paz, Manaus, San Juan (UTC-04:00) Santiago (UTC-03:30) Newfoundland (UTC-03:00) Brasilia (UTC-03:00) Buenos Aires (UTC-03:00) Cayenne, Fortaleza (UTC-03:00) Greenland (UTC-03:00) Montevideo (UTC-02:00) Coordinated Universal Time-02 (UTC-02:00) Mid-Atlantic (UTC-01:00) Azores (UTC-01:00) Cape Verde Is. (UTC) Casablanca (UTC) Coordinated Universal Time (UTC) Dublin, Edinburgh, Lisbon, London (UTC) Monrovia, Reykjavik (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna (UTC+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague (UTC+01:00) Brussels, Copenhagen, Madrid, Paris (UTC+01:00) Sarajevo, Skopje, Warsaw, Zagreb (UTC+01:00) West Central Africa (UTC+02:00) Amman (UTC+02:00) Athens, Bucharest, Istanbul (UTC+02:00) Beirut (UTC+02:00) Cairo (UTC+02:00) Harare, Pretoria (UTC+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius (UTC+02:00) Jerusalem (UTC+02:00) Minsk (UTC+02:00) Windhoek (UTC+03:00) Baghdad (UTC+03:00) Kuwait, Riyadh (UTC+03:00) Moscow, St. Petersburg, Volgograd (UTC+03:00) Nairobi (UTC+03:30) Tehran (UTC+04:00) Abu Dhabi, Muscat (UTC+04:00) Baku (UTC+04:00) Port Louis (UTC+04:00) Tbilisi (UTC+04:00) Yerevan (UTC+04:30) Kabul (UTC+05:00) Ekaterinburg (UTC+05:00) Islamabad, Karachi (UTC+05:00) Tashkent (UTC+05:30) Chennai, Kolkata, Mumbai, New Delhi (UTC+05:30) Sri Jayawardenepura (UTC+05:45) Kathmandu (UTC+06:00) Astana (UTC+06:00) Dhaka (UTC+06:00) Novosibirsk (UTC+06:30) Yangon (Rangoon) (UTC+07:00) Bangkok, Hanoi, Jakarta (UTC+07:00) Krasnoyarsk (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi (UTC+08:00) Irkutsk (UTC+08:00) Kuala Lumpur, Singapore (UTC+08:00) Perth (UTC+08:00) Taipei (UTC+08:00) Ulaanbaatar (UTC+09:00) Osaka, Sapporo, Tokyo (UTC+09:00) Seoul (UTC+09:00) Yakutsk (UTC+09:30) Adelaide (UTC+09:30) Darwin (UTC+10:00) Brisbane (UTC+10:00) Canberra, Melbourne, Sydney (UTC+10:00) Guam, Port Moresby (UTC+10:00) Hobart (UTC+10:00) Vladivostok (UTC+11:00) Magadan, Solomon Is., New Caledonia (UTC+12:00) Auckland, Wellington (UTC+12:00) Coordinated Universal Time+12 (UTC+12:00) Fiji (UTC+12:00) Petropavlovsk-Kamchatsky (UTC+13:00) Nuku'alofa When I run it on a different local machine or my server I have this. 
<option value="(GMT) Casablanca">(GMT) Casablanca</option> <option value="(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London">(GMT) Greenwich Mean Time : Dublin, Edinburgh, Lisbon, London</option> <option value="(GMT) Monrovia, Reykjavik">(GMT) Monrovia, Reykjavik</option> <option value="(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna">(GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna</option> <option value="(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague">(GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague</option> <option value="(GMT+01:00) Brussels, Copenhagen, Madrid, Paris">(GMT+01:00) Brussels, Copenhagen, Madrid, Paris</option> <option value="(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb">(GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb</option> <option value="(GMT+01:00) West Central Africa">(GMT+01:00) West Central Africa</option> <option value="(GMT+02:00) Amman">(GMT+02:00) Amman</option> <option value="(GMT+02:00) Athens, Bucharest, Istanbul">(GMT+02:00) Athens, Bucharest, Istanbul</option> <option value="(GMT+02:00) Beirut">(GMT+02:00) Beirut</option> <option value="(GMT+02:00) Cairo">(GMT+02:00) Cairo</option> <option value="(GMT+02:00) Harare, Pretoria">(GMT+02:00) Harare, Pretoria</option> <option value="(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius">(GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius</option> <option value="(GMT+02:00) Jerusalem">(GMT+02:00) Jerusalem</option> <option value="(GMT+02:00) Minsk">(GMT+02:00) Minsk</option> <option value="(GMT+02:00) Windhoek">(GMT+02:00) Windhoek</option> <option value="(GMT+03:00) Baghdad">(GMT+03:00) Baghdad</option> <option value="(GMT+03:00) Kuwait, Riyadh">(GMT+03:00) Kuwait, Riyadh</option> <option value="(GMT+03:00) Moscow, St. Petersburg, Volgograd">(GMT+03:00) Moscow, St. 
Petersburg, Volgograd</option> <option value="(GMT+03:00) Nairobi">(GMT+03:00) Nairobi</option> <option value="(GMT+03:00) Tbilisi">(GMT+03:00) Tbilisi</option> <option value="(GMT+03:30) Tehran">(GMT+03:30) Tehran</option> <option value="(GMT+04:00) Abu Dhabi, Muscat">(GMT+04:00) Abu Dhabi, Muscat</option> <option value="(GMT+04:00) Baku">(GMT+04:00) Baku</option> <option value="(GMT+04:00) Port Louis">(GMT+04:00) Port Louis</option> <option value="(GMT+04:00) Yerevan">(GMT+04:00) Yerevan</option> <option value="(GMT+04:30) Kabul">(GMT+04:30) Kabul</option> <option value="(GMT+05:00) Ekaterinburg">(GMT+05:00) Ekaterinburg</option> <option value="(GMT+05:00) Islamabad, Karachi">(GMT+05:00) Islamabad, Karachi</option> <option value="(GMT+05:00) Tashkent">(GMT+05:00) Tashkent</option> <option value="(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi">(GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi</option> <option value="(GMT+05:30) Sri Jayawardenepura">(GMT+05:30) Sri Jayawardenepura</option> <option value="(GMT+05:45) Kathmandu">(GMT+05:45) Kathmandu</option> <option value="(GMT+06:00) Almaty, Novosibirsk">(GMT+06:00) Almaty, Novosibirsk</option> <option value="(GMT+06:00) Astana, Dhaka">(GMT+06:00) Astana, Dhaka</option> <option value="(GMT+06:30) Yangon (Rangoon)">(GMT+06:30) Yangon (Rangoon)</option> <option value="(GMT+07:00) Bangkok, Hanoi, Jakarta">(GMT+07:00) Bangkok, Hanoi, Jakarta</option> <option value="(GMT+07:00) Krasnoyarsk">(GMT+07:00) Krasnoyarsk</option> <option value="(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi">(GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi</option> <option value="(GMT+08:00) Irkutsk, Ulaan Bataar">(GMT+08:00) Irkutsk, Ulaan Bataar</option> <option value="(GMT+08:00) Kuala Lumpur, Singapore">(GMT+08:00) Kuala Lumpur, Singapore</option> <option value="(GMT+08:00) Perth">(GMT+08:00) Perth</option> <option value="(GMT+08:00) Taipei">(GMT+08:00) Taipei</option> <option value="(GMT+09:00) Osaka, Sapporo, Tokyo">(GMT+09:00) Osaka, Sapporo, Tokyo</option> <option value="(GMT+09:00) Seoul">(GMT+09:00) Seoul</option> <option value="(GMT+09:00) Yakutsk">(GMT+09:00) Yakutsk</option> <option value="(GMT+09:30) Adelaide">(GMT+09:30) Adelaide</option> <option value="(GMT+09:30) Darwin">(GMT+09:30) Darwin</option> <option value="(GMT+10:00) Brisbane">(GMT+10:00) Brisbane</option> <option value="(GMT+10:00) Canberra, Melbourne, Sydney">(GMT+10:00) Canberra, Melbourne, Sydney</option> <option value="(GMT+10:00) Guam, Port Moresby">(GMT+10:00) Guam, Port Moresby</option> <option value="(GMT+10:00) Hobart">(GMT+10:00) Hobart</option> <option value="(GMT+10:00) Vladivostok">(GMT+10:00) Vladivostok</option> <option value="(GMT+11:00) Magadan, Solomon Is., New Caledonia">(GMT+11:00) Magadan, Solomon Is., New Caledonia</option> <option value="(GMT+12:00) Auckland, Wellington">(GMT+12:00) Auckland, Wellington</option> <option value="(GMT+12:00) Fiji, Kamchatka, Marshall Is.">(GMT+12:00) Fiji, Kamchatka, Marshall Is.</option> <option value="(GMT+13:00) Nuku'alofa">(GMT+13:00) Nuku'alofa</option> <option value="(GMT-01:00) Azores">(GMT-01:00) Azores</option> <option value="(GMT-01:00) Cape Verde Is.">(GMT-01:00) Cape Verde Is.</option> <option value="(GMT-02:00) Mid-Atlantic">(GMT-02:00) Mid-Atlantic</option> <option value="(GMT-03:00) Brasilia">(GMT-03:00) Brasilia</option> <option value="(GMT-03:00) Buenos Aires">(GMT-03:00) Buenos Aires</option> <option value="(GMT-03:00) Georgetown">(GMT-03:00) Georgetown</option> <option value="(GMT-03:00) Greenland">(GMT-03:00) 
Greenland</option> <option value="(GMT-03:00) Montevideo">(GMT-03:00) Montevideo</option> <option value="(GMT-03:30) Newfoundland">(GMT-03:30) Newfoundland</option> <option value="(GMT-04:00) Atlantic Time (Canada)">(GMT-04:00) Atlantic Time (Canada)</option> <option value="(GMT-04:00) La Paz">(GMT-04:00) La Paz</option> <option value="(GMT-04:00) Manaus">(GMT-04:00) Manaus</option> <option value="(GMT-04:00) Santiago">(GMT-04:00) Santiago</option> <option value="(GMT-04:30) Caracas">(GMT-04:30) Caracas</option> <option value="(GMT-05:00) Bogota, Lima, Quito, Rio Branco">(GMT-05:00) Bogota, Lima, Quito, Rio Branco</option> <option value="(GMT-05:00) Eastern Time (US &amp; Canada)">(GMT-05:00) Eastern Time (US &amp; Canada)</option> <option value="(GMT-05:00) Indiana (East)">(GMT-05:00) Indiana (East)</option> <option value="(GMT-06:00) Central America">(GMT-06:00) Central America</option> <option value="(GMT-06:00) Central Time (US &amp; Canada)">(GMT-06:00) Central Time (US &amp; Canada)</option> <option value="(GMT-06:00) Guadalajara, Mexico City, Monterrey">(GMT-06:00) Guadalajara, Mexico City, Monterrey</option> <option value="(GMT-06:00) Saskatchewan">(GMT-06:00) Saskatchewan</option> <option value="(GMT-07:00) Arizona">(GMT-07:00) Arizona</option> <option value="(GMT-07:00) Chihuahua, La Paz, Mazatlan">(GMT-07:00) Chihuahua, La Paz, Mazatlan</option> <option value="(GMT-07:00) Mountain Time (US &amp; Canada)">(GMT-07:00) Mountain Time (US &amp; Canada)</option> <option value="(GMT-08:00) Pacific Time (US &amp; Canada)">(GMT-08:00) Pacific Time (US &amp; Canada)</option> <option value="(GMT-08:00) Tijuana, Baja California">(GMT-08:00) Tijuana, Baja California</option> <option value="(GMT-09:00) Alaska">(GMT-09:00) Alaska</option> <option value="(GMT-10:00) Hawaii">(GMT-10:00) Hawaii</option> <option value="(GMT-11:00) Midway Island, Samoa">(GMT-11:00) Midway Island, Samoa</option> <option value="(GMT-12:00) International Date Line West">(GMT-12:00) International Date Line West</option> They are different. Same line of code but one is GMT and one is UTC. How can I force it to be always the same? Also I want to have a default choice of "UTC" but I am not sure what the diff is between this (UTC-11:00) Coordinated Universal Time-11 and this (UTC-02:00) Coordinated Universal Time-02

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction
    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started.
    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do… often what we do best! We “fat finger”, we spill drinks on keyboards, we unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events: Accidentally updated a table with wrong values!! Performed a batch update that went wrong – due to logical errors in the code!! Dropped a table!! How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process. Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and more convenient recovery from logical corruptions.
    History: Flashback Query, the first Flashback Technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback Technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even at a transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.
    Flashback Features: Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time. Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past. Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored. Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). This feature can also be used to build self-service error correction into applications, empowering end-users to undo and correct their errors. Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCN). Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing SQL. Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions. Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following table describes the various Flashback technologies: their purpose, dependencies and situations where each individual technology can be used.
    Example syntax, error investigation related (the purpose is to investigate what went wrong and what the values were at certain points in time):
    - Flashback Queries (select .. as of SCN | Timestamp) - helps to see the value of a row/set of rows at a point in time
    - Flashback Version Queries (select .. versions between SCN | Timestamp and SCN | Timestamp) - helps determine how the value evolved between certain SCNs or between timestamps
    - Flashback Transaction Queries (select .. XID=) - helps to understand how the transaction caused the changes
    Example syntax, error correction related (the purpose is to fix the error and correct the problems):
    - Flashback Table (flashback table .. to SCN | Timestamp) - to rewind the table to a particular timestamp or SCN to reverse unwanted updates
    - Flashback Drop (flashback table .. to before drop) - to undrop or undelete a table
    - Flashback Database (flashback database to SCN | Restore Point) - this is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)) - to reverse a transaction and its related transactions
    Advanced use cases: Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:
    - Block Media Recovery by RMAN - to perform block-level recovery
    - Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes
    - Re-instate old primary in a Data Guard environment – this avoids the need to restore an old backup and perform a recovery to make it a new standby
    - Guaranteed Restore Points - to bring back the entire database to an older point in time in a guaranteed way
    and so on. I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.

    Read the article

  • Profiling Startup Of VS2012 – SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interesting) it does attach to a specific process when it is started. For this you need to check “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which do allow to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling Api of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time which none of the other products support in their views at the same time. If you want to optimize for CPU consumption sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill deeper you can double click on a method to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic   button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab you have in the context menu also many other useful analysis options One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs how this method did behave over time. Did it become slower at some point in time or was only the first call slow? 
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but I don't have a profile. I was trying to fix the problem in question (http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow) by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files, because they don't exist.
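
    This message typically points at a stale profile lock, so even if the profile was "deleted", it is worth sweeping the default profile location for leftover lock files and stale profiles.ini entries. A small sketch of that sweep follows; the macOS paths are the usual defaults and may differ on your machine.

    ```python
    # Remove leftover Firefox profile lock files on macOS.
    import glob
    import os

    base = os.path.expanduser("~/Library/Application Support/Firefox/Profiles")
    for name in (".parentlock", "lock"):
        for path in glob.glob(os.path.join(base, "*", name)):
            print("removing", path)
            os.remove(path)
    # Also check ~/Library/Application Support/Firefox/profiles.ini for entries that
    # still point at the profile directory you deleted.
    ```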

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but, I don't have a profile. I was trying to fix the problem in question http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files because they don't exist.

    Read the article

  • linux "date -s" command not working to change date on a server

    - by nelaar
    date +%T --set="12:19:06" 12:19:06 date Mon Nov 26 12:37:32 SAST 2012 I have tried many different forms of this command, but nothing seems to work; changing the date on this server (which runs as a VM) is not working. Our messages log shows messages like this: ntpd[3496]: time correction of -1098 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time. Our server is now about 20 minutes out. It seems like our server has not been updating the time correctly for a few days. Nov 22 19:29:23 hostname ntpd[1818]: time reset -998.577519 s Nov 22 19:32:34 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 19:33:39 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 19:52:30 hostname ntpd[1818]: time reset -998.992426 s Nov 22 19:55:47 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 19:56:53 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:13:04 hostname ntpd[1818]: time reset -999.374412 s Nov 22 20:16:40 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 20:17:44 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:32:02 hostname ntpd[1818]: time reset -999.716832 s Nov 22 20:35:28 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 20:36:16 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:56:39 hostname ntpd[1818]: time correction of -1000 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.
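
    Setting the clock by hand will not stick while the guest keeps drifting; the interesting number is how fast it drifts. A throwaway sketch of mine (the file name is an assumption — paste the syslog lines above into it) that totals the ntpd "time reset" corrections:

    ```python
    # Sum the ntpd "time reset" corrections from a pasted syslog excerpt.
    import re

    resets = []
    with open("messages-excerpt.txt") as fh:          # assumed file containing the lines quoted above
        for line in fh:
            m = re.search(r"time reset (-?\d+\.\d+) s", line)
            if m:
                resets.append(float(m.group(1)))

    print(f"{len(resets)} resets, total correction {sum(resets):.1f} s")
    # Repeated ~-1000 s corrections roughly every 20 minutes usually point at the VM's
    # clock running fast, not at the 'date' command itself.
    ```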

    Read the article

  • PHP's page generation time is 0.01s. 1/0.01 = 100; however, I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10ms, so I should be able to execute 100 requests per second, one after the other (using a single core on the server, since PHP is not multithreaded). However, I'm having problems reaching 50 pages per second. As of now I do 25 on average, with a medium load. The application is really light: it consists of a read (<5KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to solve this bottleneck?
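
    A back-of-the-envelope way to see why 100 req/s is only a CPU ceiling, not a prediction (a sketch of mine: the 10 ms figure is from the question, the wait time and worker count are illustrative assumptions): throughput is governed by concurrency divided by total response time, not by CPU time alone.

    ```python
    # Rough throughput estimate: CPU ceiling vs. what wait time and concurrency allow.
    cpu_time = 0.010          # seconds of CPU per request (from the question)
    wait_time = 0.030         # assumed I/O / SSD / network wait per request
    workers = 1               # concurrent PHP processes on that core (assumption)

    response_time = cpu_time + wait_time
    throughput = workers / response_time          # Little's law: L = lambda * W
    cpu_bound_limit = 1 / cpu_time

    print(f"CPU-bound ceiling : {cpu_bound_limit:.0f} req/s")
    print(f"With {wait_time*1000:.0f} ms wait and {workers} worker(s): {throughput:.0f} req/s")
    ```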

    Read the article

  • Trying to fill the GAE datastore, but the code consumes too much CPU time. How to optimize this?

    - by Neverland
    I am trying to get the list of Amazon EC2 images into the Google datastore. I want to do this with a cron job inside GAE. class AmazonEC2uswest(db.Model): ami = db.StringProperty(required=True) mani = db.StringProperty() typ = db.StringProperty() arch = db.StringProperty() state = db.StringProperty() owner = db.StringProperty() class CronAMIsAmazonUS_WEST(webapp.RequestHandler): def get(self): aws_access_key_id_admin = "<secret>" aws_secret_access_key_admin = "<secret>" conn_us_west = boto.ec2.connect_to_region('us-west-1', aws_access_key_id=aws_access_key_id_admin, aws_secret_access_key=aws_secret_access_key_admin, is_secure = False) liste_images_us_west = conn_us_west.get_all_images() laenge_liste_images_us_west = len(liste_images_us_west) for i in range(laenge_liste_images_us_west): datastore_uswest_AMIs = AmazonEC2uswest(ami=liste_images_us_west[i].id, mani=str(liste_images_us_west[i].location), typ=liste_images_us_west[i].type, arch=liste_images_us_west[i].architecture, state=liste_images_us_west[i].state, owner=liste_images_us_west[i].ownerId) datastore_uswest_AMIs.put() The problem: getting the list with get_all_images() takes only a few seconds, but writing the data to the Google datastore needs way too much CPU time. My IBM T42p (P4M with 2GHz) needs approx. 1 minute for that piece of code! Is it possible to optimize my code so that it needs less CPU time?
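
    A sketch of the usual fix for this pattern (assuming the same model class as above): the cost here is dominated by making one put() RPC per image, so build the entities first and write them with a single batched db.put() call, chunking if the list is long.

    ```python
    # Batch the datastore writes instead of calling put() once per entity.
    entities = []
    for image in liste_images_us_west:
        entities.append(AmazonEC2uswest(
            ami=image.id,
            mani=str(image.location),
            typ=image.type,
            arch=image.architecture,
            state=image.state,
            owner=image.ownerId))

    # One RPC for the whole list (split into batches of a few hundred if needed).
    db.put(entities)
    ```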

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and, more specifically, Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle that supports this idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term to explain that he believes it’s a convergence of four essential technology elements: event processing for event filtering and business rules – with Oracle Event Processing; data transformation and loading – with Oracle Data Integrator; real-time replication and integration – with Oracle GoldenGate; and analytics and data discovery – with Oracle Business Intelligence. Each of these four elements can be considered (and architect-ed) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured) leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us some even greater optimization to Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry. There are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year.
    1. Increase organizational awareness | Oracle WebCenter Suite
    Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of record such as ERP and CRM systems.
    2. Drive online customer engagement | Oracle Real-Time Decisions
    With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment.
    3. Easily build composite applications | Oracle Application Development Framework
    Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications.
    4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g
    Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications.
    5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata
    To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes. Facebook is getting ready to do a purge of “Likes” on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it’s easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don’t sweat it, that’s a good thing. The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who’s actually engaged with the brand. Those fake Likes aren’t doing you any good. Huge numbers may once have impressed, but they’re not fooling anybody anymore. Facebook’s selling point to marketers is the ability to use a brand’s fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count. Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What’s the value of a click on something a user never even saw? What’s the value of a seen but accidentally clicked ad if there’s no resulting transaction? Back to fake Likes, followers and views; they’re definitely for sale on numerous sites, none of which I’ll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they’re buying. But now, in addition to having paid good money for non-ROI numbers, there’s the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain’s Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see. Fake isn’t good, period. “Inactive” could be real followers, but if they’re real, they’re just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you’re suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama’s (70% fake according to StatusPeople) and Lady Gaga’s (71% fake) don’t mean these people knew about all those fakes or initiated them.
Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn, and £1.9 million in the UK. 70% of data breaches are done from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US States have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to suffer investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your data. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk free. The process must access the live data, and the success of the obfuscation process has to be carefully monitored.
Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
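    To make the "created entirely from scratch" option concrete, here is a minimal, illustrative sketch (an assumption-laden example, not from the editorial; the table, column names and SQLite target are invented) of generating spoofed customer rows so that no live personal data ever reaches a development database:

    import random
    import string
    import sqlite3

    def fake_name():
        # Entirely synthetic name: one capital followed by seven lowercase letters.
        return random.choice(string.ascii_uppercase) + \
               ''.join(random.choice(string.ascii_lowercase) for _ in range(7))

    conn = sqlite3.connect('dev_test.db')
    conn.execute("CREATE TABLE IF NOT EXISTS customers "
                 "(id INTEGER PRIMARY KEY, name TEXT, card_last4 TEXT)")

    # 1,000 spoofed rows: realistic in shape, meaningless in content.
    rows = [(i, fake_name(), '%04d' % random.randint(0, 9999)) for i in range(1000)]
    conn.executemany("INSERT OR REPLACE INTO customers (id, name, card_last4) "
                     "VALUES (?, ?, ?)", rows)
    conn.commit()

    Third-party data generators do the same thing at scale, with referential integrity and realistic distributions, which is what makes the approach practical for real schemas.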

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank was saying about the “Future of Work”, and how the Business should prepare for the new generation not only of devices and the “internet of things” but also of their users (“The Millennials”), who are moving from “consumers” to “prosumers”: we are at a turning point today which is bringing us to the next IT Architecture Wave. So this is no longer just about putting Big Data, Social Networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying which processes need to be improved, but also and more importantly which processes are obsolete and need to be retired, and which new processes need to be put in place. It is about managing both the hierarchical and structured Enterprise and its social connections and influencers inside and outside of the Enterprise. And this applies everywhere, up to the Utilities and Smart Grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, answer Business questions, and put those capabilities, in real time, right into the hands of the decision makers... This is the turning curve, where IT is really moving from the past decade's "Cost Center" to "Value for the Business", as Corporate Stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making Data Driven Strategic decisions, encompassed and enriched by ALL the Data, and connected to customer/prosumer influencers. This brings to stakeholders the ability to make informed decisions on questions like: “What would be the best Olympic Gold winner to represent my automotive brand?”... in a few clicks and in real time, based on social media analysis (twitter, Facebook, Google+...) and connections linked to my Enterprise data. A true example demonstrated by Larry Ellison in real time during his keynote yesterday, where “Hardware and Software Engineered to Work Together” is not only about extreme performance but also about solutions that the Business can touch, thanks to well-integrated Customer eXperience Management and Social Networking: bringing IT the capabilities to move to the next IT Architecture wave. An example also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the “Internet of Things” in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with the ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed decision interface up to the end user. The second session showed the Real-Time Decisions engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet. I have to close my post here, as I have to go run our practical hands-on lab, prepared with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler.
John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, like Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced Xsigo Datacenter Fabric for highly simplified network and storage virtualization for your Cloud Infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observing SQL Server experiencing high IO wait times, or does your DAS / HDD become very slow from time to time according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes/hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete or get suspended due to waits or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 X 4ms = 20ms in the runnable state (normally less, as other tasks might not use the full quantum). In case the CPU usage is high, let’s say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the Hardware and Operating System level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
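    As a rough diagnostic sketch (an illustration, not part of the article; the connection string and ODBC driver name are placeholders), you can compare signal wait time with resource wait time for the wait types mentioned above, to see how much of the delay is spent waiting for CPU after the IO itself has completed:

    import pyodbc

    # Placeholder connection details -- adjust driver/server/authentication for your setup.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes")

    query = """
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms,
           wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH')
    ORDER BY wait_time_ms DESC;
    """

    for row in conn.cursor().execute(query):
        # A high signal_wait_time_ms relative to the resource wait suggests the
        # CPU-starvation effect described above rather than slow storage.
        print(row.wait_type, row.wait_time_ms, row.signal_wait_time_ms)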
    We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon.
    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Why does my Messaging Menu code not work when split into functions?

    - by fluteflute
    Below are two Python programs. They're exactly the same, except that one is split into two functions. However, only the one that's split into two functions doesn't work - the second function doesn't work. Why would this be? Note the code is taken from this useful blog post.

    Without functions (works):

    import gtk

    def show_window_function(x, y):
        print x
        print y

    # get the indicate module, which does all the work
    import indicate

    # Create a server item
    mm = indicate.indicate_server_ref_default()
    # If someone clicks your server item in the MM, fire the server-display signal
    mm.connect("server-display", show_window_function)
    # Set the type of messages that your item uses. It's not at all clear which types
    # you're allowed to use, here.
    mm.set_type("message.im")
    # You must specify a .desktop file: this is where the MM gets the name of your
    # app from.
    mm.set_desktop_file("/usr/share/applications/nautilus.desktop")
    # Show the item in the MM.
    mm.show()

    # Create a source item
    mm_source = indicate.Indicator()
    # Again, it's not clear which subtypes you are allowed to use here.
    mm_source.set_property("subtype", "im")
    # "Sender" is the text that appears in the source item in the MM
    mm_source.set_property("sender", "Unread")
    # If someone clicks this source item in the MM, fire the user-display signal
    mm_source.connect("user-display", show_window_function)
    # Light up the messaging menu so that people know something has changed
    mm_source.set_property("draw-attention", "true")
    # Set the count of messages in this source.
    mm_source.set_property("count", "15")
    # If you prefer, you can set the time of the last message from this source,
    # rather than the count. (You can't set both.) This means that instead of a
    # message count, the MM will show "2m" or similar for the time since this
    # message arrived.
    # mm_source.set_property_time("time", time.time())
    mm_source.show()

    gtk.mainloop()

    With functions (second function is executed but doesn't actually work):

    import gtk

    def show_window_function(x, y):
        print x
        print y

    # get the indicate module, which does all the work
    import indicate

    def function1():
        # Create a server item
        mm = indicate.indicate_server_ref_default()
        # If someone clicks your server item in the MM, fire the server-display signal
        mm.connect("server-display", show_window_function)
        # Set the type of messages that your item uses. It's not at all clear which types
        # you're allowed to use, here.
        mm.set_type("message.im")
        # You must specify a .desktop file: this is where the MM gets the name of your
        # app from.
        mm.set_desktop_file("/usr/share/applications/nautilus.desktop")
        # Show the item in the MM.
        mm.show()

    def function2():
        # Create a source item
        mm_source = indicate.Indicator()
        # Again, it's not clear which subtypes you are allowed to use here.
        mm_source.set_property("subtype", "im")
        # "Sender" is the text that appears in the source item in the MM
        mm_source.set_property("sender", "Unread")
        # If someone clicks this source item in the MM, fire the user-display signal
        mm_source.connect("user-display", show_window_function)
        # Light up the messaging menu so that people know something has changed
        mm_source.set_property("draw-attention", "true")
        # Set the count of messages in this source.
        mm_source.set_property("count", "15")
        # If you prefer, you can set the time of the last message from this source,
        # rather than the count. (You can't set both.) This means that instead of a
        # message count, the MM will show "2m" or similar for the time since this
        # message arrived.
        # mm_source.set_property_time("time", time.time())
        mm_source.show()

    function1()
    function2()
    gtk.mainloop()
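    One plausible explanation (an assumption, not something stated in the question): mm and mm_source are local variables, so nothing holds a reference to the indicator objects once function1() and function2() return. Keeping module-level references, for example by returning the objects, is one thing worth trying:

    # Sketch only: the same calls as above, but each function hands its object back
    # so the caller can keep it alive for the lifetime of gtk.mainloop().
    def function1():
        mm = indicate.indicate_server_ref_default()
        mm.connect("server-display", show_window_function)
        mm.set_type("message.im")
        mm.set_desktop_file("/usr/share/applications/nautilus.desktop")
        mm.show()
        return mm

    def function2():
        mm_source = indicate.Indicator()
        mm_source.set_property("subtype", "im")
        mm_source.set_property("sender", "Unread")
        mm_source.connect("user-display", show_window_function)
        mm_source.set_property("draw-attention", "true")
        mm_source.set_property("count", "15")
        mm_source.show()
        return mm_source

    # Module-level names keep the objects from being garbage collected
    # while the main loop runs.
    server_item = function1()
    source_item = function2()
    gtk.mainloop()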

    Read the article

  • Problem with TimeZoneInfo.ConvertTime: missed the Daylight Saving switch.

    - by SirMoreno
    My web app runs on .NET 3.5; all of the dates are saved in the DB in UTC time (not in user time). When I want to display a date I convert it to the user's local date (from UTC):

    //Get the current datetime of the user exp: GMT TO ISRAEL +2
    public static DateTime GetUserDateTime(DateTime dateUTC)
    {
        string userTzId = "Israel Standard Time";
        TimeZoneInfo userTZ = TimeZoneInfo.FindSystemTimeZoneById(userTzId);
        dateUTC = DateTime.SpecifyKind(dateUTC, DateTimeKind.Utc);
        DateTime ret = TimeZoneInfo.ConvertTime(dateUTC, TimeZoneInfo.Utc, userTZ);
        return ret;
    }

    Until now it worked fine, but I have users from Israel (GMT +2), and Israel switched to daylight saving time on 26/3/10, so now it is GMT +3. For some reason TimeZoneInfo.ConvertTime doesn't know that the daylight saving switch is on 26/3/10, so it still converts to GMT +2. The strange thing is that on localhost it works fine. I set up a test page:

    DateTime userdate = GetUserDateTime(DateTime.UtcNow);
    string str2 = "UserDateTime = " + userdate.ToString("dd/MM/yy") + " " + userdate.ToString("HH:mm");

    On the server (Windows 2003 set to UTC time) it shows the wrong time (+2): UserDateTime = 27/03/10 21:38
    On localhost (Windows XP set to Israel Time) it shows the correct time (+3): UserDateTime = 27/03/10 22:38
    How can I update the TimeZoneInfo so that it knows the daylight saving switch in Israel was on 26/3/10? Thanks.

    Read the article
