Search Results

Search found 13375 results on 535 pages for 'agile tools'.


  • Recover windows cached domain password

    - by theguy
    I have a computer from another small organization that works with our school. It was previously joined to a domain elsewhere. The organization doesn't have an IT person, so they didn't think about what they needed to do with the information on the computer before they moved it to our school. The previous user of the computer is no longer with the organization, so there is no information about the password. The computer has information and programs that need to be accessed, so putting the hard drive in another computer and grabbing the data is a no-go; I need the computer itself to be working as well. The computer is running Windows Vista Business Edition and is joined to a domain with a cached profile. The admin accounts are disabled by GPO. I've been asked to see if I could recover the password, but running ophcrack gave me no hits on the cached profile. I'm not too familiar with password recovery tools that would work on a cached domain profile, so I'm looking for answers here. Any other suggestions? Preferably something free, as we're a small school, and an easy-to-use live CD solution like ophcrack would be appreciated.

    Read the article

  • Restoring a fresh home folder in a shared user domain environment

    - by Cocoabean
    I am using a tool called pGINA that adds another credential provider to my Windows 7 clients so we can authenticate campus users via campus LDAP. We have the default Windows credential providers set up to authenticate against our Active Directory, but we have students in our classes who don't have entries in our AD, and we need to know who they are to allow them internet access. Once these LDAP users log in using pGINA, they are all redirected to the same AD account, a 'kiosk' account with GPOs in place to prevent anything malicious. My concern is that my users will accidentally save personal login information or files in that shared profile, and another user may log in later and have access to a previous user's Gmail account, as the AppData folder on each computer is shared by anyone logging into the kiosk user. I've looked into MS's 'roll-your-own' SteadyState but it didn't seem to have what I wanted. I tried to write a PS script to copy a pre-saved clean version of the profile from a network share, but I kept running into issues with CredSSP delegation and accessing the share via the UNC path. Others have recommended something like DeepFreeze, but I'd like to do it without 3rd-party tools if possible.

    Read the article

  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid configuration for MySQL Cluster? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.

    Read the article

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google. Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip Update: Follow-up question here.
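
    Although the question asks for a ready-made tool rather than development work, the sort it describes is small enough to sketch. A minimal version using Python's standard ElementTree, sorting children by the name="" attribute and falling back to the tag name; the input/output file names are placeholders, and it assumes the files, while large, still fit in memory:

        # sort_xml.py - sketch of the recursive sort described in the question.
        # Elements with a name="" attribute sort by that value; elements
        # without one sort by their tag name.
        import xml.etree.ElementTree as ET

        def sort_key(elem):
            # (0, name) for named elements, (1, tag) for unnamed ones
            name = elem.get('name')
            return (0, name) if name is not None else (1, elem.tag)

        def sort_children(parent):
            for child in parent:
                sort_children(child)          # depth-first: sort grandchildren first
            parent[:] = sorted(parent, key=sort_key)

        tree = ET.parse('config.xml')         # placeholder input file
        sort_children(tree.getroot())
        tree.write('config.sorted.xml', encoding='utf-8', xml_declaration=True)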

    Read the article

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface? [migrated]

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WIFI/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her wifi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the wifi devices... and if the uninvited guest broke the password once, they can do it again... Her wifi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's wifi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service... So, how could she check if there are guests on the wifi with these restrictions, and in the most "point and click" way? In this case I'd probably use nmap to look for other devices in the network, but I'm not sure if that's the easiest way to do it. I'm not a wifi expert, so I don't know if there are any wifi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, which I guess will influence the choice of tools available. Any suggestions more than welcome. Regards!
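
    For reference, the nmap-style check mentioned above boils down to an ARP sweep of the local subnet, which lists every device currently on the wifi; a point-and-click scanner would suit the relative better, but a minimal sketch with the Python scapy package looks like this (needs admin rights and, on Windows, Npcap; the subnet is a placeholder):

        # arp_scan.py - sketch: list devices answering on the local subnet.
        # Adjust SUBNET to match the router's network and run with admin rights.
        from scapy.all import ARP, Ether, srp

        SUBNET = "192.168.1.0/24"  # placeholder - use the network's actual range

        # broadcast an ARP "who-has" for every address in the subnet
        packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET)
        answered, _ = srp(packet, timeout=3, verbose=False)

        for _, reply in answered:
            print(f"{reply.psrc:16} {reply.hwsrc}")   # IP address and MAC address

    Any entry that is not the router, the relative's own PC, phone or printer is a candidate uninvited guest.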

    Read the article

  • Win 7 firewall won't turn on, nor the McAfee firewall. Hit by "Win 7 Anti-virus 2012" trojan. Removed, but a downed firewall is a lasting legacy

    - by PhxTitan
    I caught the trojan right away, I think, but neither my McAfee firewall nor the Win 7 (x64) firewall can be turned on now. I get MS error code 0x80070424 when attempting to turn on the Win 7 firewall. No viruses: I swept the machine with McAfee AV, Malwarebytes Anti-Malware, and Microsoft's malware removal tools. I followed the three alternative courses of action Microsoft posted for getting the Win 7 firewall back up and on. Nothing; same error code. The post just said to see MS support if those fixes failed. So I removed McAfee altogether. The Win 7 (Professional) firewall still won't come on, and the machine is clean of detectable bugs. I'm also fully up to date with MS Windows 7 updates, though updating is no longer automatic; that too is a legacy of the trojan, I think. Any thoughts on how to get the Win 7 firewall operational? And automatic updating re-engaged?

    Read the article

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never had any. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can't perform any action anymore on Office 365 users: all changes need to be made on the local Active Directory, and then replicated by the synchronization process. For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users which need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, as you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes, and then manually edit them (!). Or, you can install at least one local Exchange server, and then use the Exchange administrative tools to configure the required settings. Is this correct or am I missing something? Is there any way to synchronize user accounts and password, but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?
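
    For the "extend the schema and edit the attributes yourself" route described above, the edits can at least be scripted instead of being done one LDAP attribute at a time by hand. A rough sketch with the Python ldap3 package; the server, credentials and DN are placeholders, and the msExchHideFromAddressLists attribute only exists once the Exchange schema extensions are in place:

        # set_exchange_attrs.py - sketch: set Exchange-related attributes on an
        # AD user so directory synchronization carries them to Office 365.
        # Assumes the AD schema has been extended with the Exchange attributes.
        from ldap3 import Server, Connection, MODIFY_REPLACE, MODIFY_ADD, NTLM

        server = Server('dc01.example.local')                      # placeholder DC
        conn = Connection(server, user='EXAMPLE\\admin',           # placeholder creds
                          password='...', authentication=NTLM, auto_bind=True)

        user_dn = 'CN=Jane Doe,OU=Staff,DC=example,DC=local'       # placeholder DN

        conn.modify(user_dn, {
            # additional (secondary) SMTP address - lowercase "smtp:" prefix
            'proxyAddresses': [(MODIFY_ADD, ['smtp:jane.alias@example.com'])],
            # hide the mailbox from the address list
            'msExchHideFromAddressLists': [(MODIFY_REPLACE, ['TRUE'])],
        })
        print(conn.result['description'])    # 'success' if the write went through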

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDOS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:
    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up MySQL connections so that they run sequentially so that slow queries won't bog down the system
    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. PS: Notes about monitoring for DDOS would also be welcomed - perhaps with Nagios? ;)
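
    On the monitoring side, here is a rough sketch of the per-IP rate check from the first point above, assuming an nginx/Apache-style access log; the log path and threshold are placeholders, and the flagged addresses could then be fed to iptables or a security group rule:

        # flag_heavy_ips.py - sketch: count requests per client IP in a rolling
        # 60-second window and print offenders.
        import time
        from collections import defaultdict, deque

        LOG_PATH = "/var/log/nginx/access.log"   # placeholder
        THRESHOLD = 300                          # requests per 60s before flagging

        hits = defaultdict(deque)                # ip -> deque of request timestamps

        with open(LOG_PATH) as log:
            log.seek(0, 2)                       # start at the end, like `tail -f`
            while True:
                line = log.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                ip = line.split()[0]             # common/combined formats start with the client IP
                now = time.time()
                window = hits[ip]
                window.append(now)
                while window and now - window[0] > 60:
                    window.popleft()             # drop hits older than the window
                if len(window) > THRESHOLD:
                    print(f"{ip} made {len(window)} requests in the last minute")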

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
    I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32G of memory; Domain-0 has 2048 M assigned to it (scaled down with xm mem-set Domain-0 2048), and top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options: memory = '512' and maxmem = '2048'. Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512MB of memory, as expected. Then 'xm mem-set domU 1024' will not expand the memory to 1024MB. Running 'xm mem-set domU 400' does set the memory to about 400MB, and then 'xm mem-set domU 1024' expands the memory back to 512MB. Based on this, you would say that xm ignores maxmem and silently sets it to 512, but in the output of xm top the MAXMEM column reads 2G; the MEM column will not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled myself all the way around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but can't find it anymore. Does anyone see what I'm doing wrong here? Hmm... I just upgraded my kernel to the one provided by Debian backports, and the issue has gone away.

    Read the article

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for Unix-like OSes which can store deduplicated backups, i.e. only non-redundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes 3GB, VMware images 10GB), such files are stored again entirely even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that does this by now is obnam (http://liw.fi/obnam), and it even supports zlib compression and gpg encryption -- nice! But it is very slow, AFAICT. Does anyone know any other solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
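
    For reference, the block-store approach described above (and implemented by obnam) reduces to hashing blocks and writing each unique block only once. A minimal fixed-size-block sketch in Python; the store path, block size and file name are placeholders, and real tools add indexing, compression and encryption on top:

        # block_dedup.py - sketch of sub-file deduplication: split a file into
        # fixed-size blocks, store each unique block once (named by its SHA-256
        # hash), and record the sequence of hashes as the file's "recipe".
        import hashlib, os

        STORE = "/backup/blocks"       # placeholder block store directory
        BLOCK_SIZE = 1024 * 1024       # 1 MiB blocks (placeholder)

        def backup_file(path):
            os.makedirs(STORE, exist_ok=True)
            recipe = []
            with open(path, "rb") as f:
                while True:
                    block = f.read(BLOCK_SIZE)
                    if not block:
                        break
                    digest = hashlib.sha256(block).hexdigest()
                    blob = os.path.join(STORE, digest)
                    if not os.path.exists(blob):           # only store new content
                        with open(blob, "wb") as out:
                            out.write(block)
                    recipe.append(digest)
            # the recipe (list of block hashes) is all that is kept per generation
            with open(path + ".recipe", "w") as r:
                r.write("\n".join(recipe))

        backup_file("/home/user/thunderbird-mailbox")      # placeholder file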

    Read the article

  • Formal separation marker of syslog events?

    - by Server Horror
    I've been looking at RFC 5424 to find the formally specified marker that ends a syslog event. Unfortunately I couldn't find it. So if I wanted to implement some small syslog server that reacts on certain messages, what is the marker that ends a message (yes, commonly an event is a single line, but I just couldn't find it in the specification)? Clarification: I call it an event because I associate a message with a single line. An event could possibly be something like:
    Type: foo
    Source: webservers
    whereas a message to me is this: Type: foo Source: webservers
    http://tools.ietf.org/html/rfc5424#section-6 defines:
    SYSLOG-MSG = HEADER SP STRUCTURED-DATA [SP MSG]
    Neither STRUCTURED-DATA nor MSG tells me how these fields end. Especially MSG is defined as MSG-ANY / MSG-UTF8, which expands to virtually anything. There's nothing that says a newline marks the end (or an 8 or an a for that matter). Given the example messages (section 6.5), this is one valid message, or 2 valid messages, depending on whether you say that a HEADER element must never occur in any MSG element:
    literal whitespace:
    <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?
    \t stands for a tab:
    <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\t<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?
    \n stands for a newline:
    <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\n<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 | is this an end marker?
    Either I'm misreading the RFC or there just isn't any mention. The sizes specified in the RFC just say what minimum length I can expect to be able to work with...
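
    For context: RFC 5424 deliberately leaves the framing to the transport mappings. Over UDP (RFC 5426) one datagram carries exactly one message, and over TCP (RFC 6587) either a leading octet count or a trailing LF delimits each message, so the "end marker" lives below the syslog layer. A minimal sketch of a UDP receiver that reacts to certain messages, where no in-band marker is needed at all; the port and match string are placeholders:

        # syslog_listener.py - sketch: minimal UDP syslog receiver. Over UDP each
        # datagram is one complete SYSLOG-MSG, so the datagram boundary is the
        # message boundary.
        import socket

        HOST, PORT = "0.0.0.0", 5140        # placeholder (514 needs root)
        MATCH = "su"                        # placeholder: react to messages mentioning su

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((HOST, PORT))

        while True:
            datagram, addr = sock.recvfrom(65535)       # one datagram = one message
            message = datagram.decode("utf-8", errors="replace")
            if MATCH in message:
                print(f"from {addr[0]}: {message}")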

    Read the article

  • Error at the end of APC install

    - by cinqoTimo
    I need to get APC running for a Drupal install of mine. I found a fairly concise guide at http://blog.4rev.net/2009-09/installing-apc-accelerator-into-php5-fedora-core-11/ for installing on FC11; I am using FC12, but figured I would give it a shot. I was able to run the following commands successfully, and yum installed FC12 versions of everything in the FC11 guide:
    yum install php-pear
    yum install php-devel httpd-devel
    yum groupinstall 'Development Tools'
    yum groupinstall 'Development Libraries'
    Then I tried pecl install apc. Everything looked good until it got to the end, where it output the following error:
    /var/tmp/APC/php_apc.c: In function 'zif_apc_compile_file':
    /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_class_table'
    /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_function_table'
    /var/tmp/APC/php_apc.c: At top level:
    /var/tmp/APC/php_apc.c:959: error: duplicate 'static'
    make: *** [php_apc.lo] Error 1
    ERROR: `make' failed
    Some people have had success with installing the APC beta, but that didn't work for me. Any suggestions? Is there something I missed that is critical in the FC12 version?

    Read the article

  • Javascript loading never completes on many sites

    - by Joe
    I recently moved country and have found that on many websites the page never finishes loading. In some cases, no content is ever displayed, but the loading will never time out. Loading Developer Tools in Chrome shows me that it is the Javascript files which never load. For example, this BBC article will never load compatability.js, though will load all the other JS files perfectly. Google Maps often fails to finish loading, meaning it's impossible to make searches. There seems to be no pattern to which files will fail to load (i.e. they don't come from the same CDN). I have tried Chrome, Safari and Firefox on OSX 10.8, and Chrome on my girlfriend's OSX 10.7. I have similar issues on the iPad. In many cases, if I can go to the mobile version of the page that seems to load fine. I have run the browsers in private mode, disabled plugins, updated flash, cleared the cache, flushed the DNS cache - though it would seem that if this is happening on other devices, none of this would work anyway. Is this an ISP issue? And if so, why would it be limited to certain JS files and not all? JS files from the same domain work fine, so I'm not really sure what I should be looking for.

    Read the article

  • How can I duplicate HBCD's XP boot loader with my MBR?

    - by Warpstone
    I'm stumped. I'm migrating a Win XP Lenovo T500 to an SSD: I copied the XP partition to the SSD using EaseUS and aligned the boot sector using GParted. The MBR needs to be rebuilt (fair enough); however, all attempts to use the Windows Recovery Console hang (both via a boot CD and even when the console was installed as a boot option). I've tried using a bunch of tools to rebuild/replace the MBR, but no dice. They all say the MBR has been fixed, but I cannot load Windows from the SSD. However, HBCD's "boot from Windows" option works just fine. I'm confused as to what HBCD can do that my drive can't. How can I get that functionality on my SSD? Is it an MBR fix I can mirror? The SSD is extremely fast when I do use HBCD to boot up... but it would be nice to not need token-based access to the machine! :) Note: I know Windows 7 may be worth a fresh install, but I'm trying to avoid the cost and hassle if possible.

    Read the article

  • VMware guest pauses when the host is idle - how do I keep it running?

    - by EMP
    I'm running VMware Workstation 7 with Windows 7 x64 as guest and Windows XP x64 as host. Inside the guest I run a long-running console application, which prints out progress messages with timestamps on them. Sometimes I leave it running for several hours while I lock the host OS and don't touch the computer at all. When I come back I find that some time after I left, it seems to have paused and automatically resumed: the console app hasn't made much progress and there's a large time gap in its progress messages. There's nothing relevant in the host event log, but in the guest Application event log I can see these messages around the time I left: "A request to disable the Desktop Window Manager was made by process (VMware Tools Service)" and "The Desktop Window Manager was unable to start because composition was disabled by a running application". And later, around the time I returned, this shows up in the System log: "The system time has changed to 2012-01-12T06:36:46.921000000Z from 2012-01-12T03:18:19.953079000Z." That seems to support my theory that it's VMware doing something and not Windows itself. The question is: how do I stop it doing that? I want my application to continue running. By the way, the power options are set to never sleep in both guest and host.

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably just those received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that are something ridiculous like $25 per user; I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need to have local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are 3rd-party tools I could use to log this information? With Linux, which I used to use in past ages, you could look at a user's bash history file to see what commands they have run. While I realise that specific log could also be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are other ways I can also log what other system-configuration-type changes they make as well (not necessarily command-line based), that's also useful. I know about event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.
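
    Windows' own process-creation auditing (security event 4688) is probably the cleaner route, but as a sketch of what a small homegrown watcher could look like, the Python wmi package can subscribe to process-creation events and record each command line. This assumes admin rights plus the 'wmi' and 'pywin32' packages; the log path is a placeholder, and it catches programs launched from the prompt rather than cmd built-ins like dir:

        # watch_processes.py - sketch: log every new process and its command line
        # using WMI process-creation events.
        import wmi
        from datetime import datetime

        LOG_PATH = r"C:\ProgramData\proc_audit.log"     # placeholder

        c = wmi.WMI()
        watcher = c.Win32_Process.watch_for("creation")

        while True:
            proc = watcher()                            # blocks until a process starts
            line = f"{datetime.now().isoformat()} {proc.Caption} {proc.CommandLine}\n"
            with open(LOG_PATH, "a") as log:
                log.write(line)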

    Read the article

  • Strange Photoshop Problem: Can not select, zoom, paint, option button 'locked'

    - by nikcub
    I have a very strange problem with Photoshop. I cannot use any of the tools, since the cursor appears 'locked'. If I press V on my keyboard, it goes to the zoom tool, but the cursor does not change. If I select the paintbrush tool, I can only paint if I hold down the Option key. This is what the cursor looks like (I had to paint it since I couldn't capture it): a rectangle with two lines through it. I am running Photoshop CS4 on a MacBook Pro with Mac OS X 10.6.6. Using both the trackpad and an external Logitech MX5000 mouse I see the same issue. This started when I fired up Photoshop today for the first time in a while. I can't remember changing any options or doing anything that could cause this. Is it possible that the Option key is somehow locked in place, or there is some equivalent of Num Lock on? Very strange problem; I would appreciate any help anybody can offer. Edit: To add, the icon remains the same within all the menu options - it never goes back to being a normal mouse cursor. Also, right-click works fine, and if I hold down Option, the cursor goes back to normal and I can paint with it. I can't use Marquee, Lasso, Crop, Type etc. even with Option held down. When I go into Bridge, it is the same icon.

    Read the article

  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped, and it doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen cause this historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal: "Utmp slot not found -> not removed". I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem determination tools are available to investigate what is slowing down this login process?

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another one. So far, it's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; using such backup programs will result in the hard links being treated as multiple files and copied as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
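
    Since the SMB share hides the links, any solution has to look at the file IDs on the source side and recreate the links on the destination side. A rough Python sketch of that idea, run locally against both trees: group source files by their file ID, copy each unique file once, and recreate the remaining names with os.link. The paths are placeholders, os.stat exposes the NTFS file ID as st_ino on Python 3, and this ignores attributes/ACLs:

        # sync_with_hardlinks.py - sketch: mirror a tree while preserving hard
        # links. Files that share a file ID in the source become hard links to
        # a single copy in the destination.
        import os, shutil

        SRC = r"D:\data"           # placeholder source folder
        DST = r"E:\backup\data"    # placeholder destination folder

        copied = {}                # (st_dev, st_ino) -> path of the copy already made

        for root, dirs, files in os.walk(SRC):
            rel = os.path.relpath(root, SRC)
            dst_dir = os.path.normpath(os.path.join(DST, rel))
            os.makedirs(dst_dir, exist_ok=True)
            for name in files:
                src_path = os.path.join(root, name)
                dst_path = os.path.join(dst_dir, name)
                st = os.stat(src_path)
                key = (st.st_dev, st.st_ino)
                if key in copied:
                    os.link(copied[key], dst_path)     # recreate the hard link
                else:
                    shutil.copy2(src_path, dst_path)   # first (or only) occurrence
                    if st.st_nlink > 1:
                        copied[key] = dst_path         # remember it for later links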

    Read the article

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run or was otherwise down for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK). Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
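
    Nagios can cover this with passive checks plus a freshness threshold (it alerts when no result arrives within the configured window), but the "tripwire" described above is also small enough to sketch directly. Assuming the daily job simply touches a heartbeat file (or hits a tiny endpoint) when it finishes, a cron'd checker like the following alerts only when that heartbeat is too old; the paths, addresses and mail relay are placeholders:

        # heartbeat_check.py - sketch of the "tripwire": the daily job touches
        # HEARTBEAT_FILE when it completes; this checker runs from cron and
        # alerts only when no heartbeat has arrived within MAX_AGE.
        import os, smtplib, time
        from email.message import EmailMessage

        HEARTBEAT_FILE = "/var/run/vip.heartbeat"     # placeholder
        MAX_AGE = 26 * 3600                           # a bit over a day of slack
        ALERT_TO = "ops@example.com"                  # placeholder address

        def alert(body):
            msg = EmailMessage()
            msg["Subject"] = "VIP job missed its daily run"
            msg["From"] = "tripwire@example.com"
            msg["To"] = ALERT_TO
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
                smtp.send_message(msg)

        try:
            age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
        except FileNotFoundError:
            alert("No heartbeat file has ever been written.")
        else:
            if age > MAX_AGE:
                alert(f"Last heartbeat was {age / 3600:.1f} hours ago.")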

    Read the article

  • How to perform this Windows 7 permissions change on many files via GUI or command line

    - by hippietrail
    After using my external hard drive on another Windows 7 computer to tweak photos with Windows Live Photo Gallery and then upload them to Facebook, I found the modified images were now not visible on the original Windows 7 computer. I'm not sure if the things I tried to get it working subsequently changed anything, but I do know this is the sequence of actions that makes the permissions of the modified files match those of the unmodified files:
    1. Right-click on the broken image file, select "Properties"
    2. On the "Security" tab press the "Advanced" button
    3. In the "Permissions" tab press the "Continue" button with the shield icon on it
    4. Tick the box marked "Include inheritable permissions from this object's parent"
    5. Click the "Remove" button to remove the only current entry "Type: Allow, Name: Administrators (XYZ\Administrators), Permission: Full control, Inherited From:
    6. OK on the "Permissions" tab
    7. OK on the "Security" tab
    Now this same procedure does not work at the folder level: it results in "access denied" dialogs. I'm looking for some way to perform this exact modification on all the images I edited on the other computer. I'm happy to use the Windows GUI in Explorer or any other included tools. I'm happy to use the Windows command line. I'd prefer not to use a third-party tool, since I'd have to be satisfied it's not doing anything else. I'm not looking for a different way to change permissions to other settings to make an external drive full of photos editable on multiple computers. At least not in this question.

    Read the article

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then using the old server as the recovery server (temporarily, until we can source a replacement server). However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we regularly test it, obviously!).

    Read the article

  • Push, parse & import "selected" data, text, info blobs from Webpages/ Emails as Event/ Appointment to standard Calendar directly or as .ics file?

    - by Alex S
    Is there any tool, plugin, extension, or script/code to push "selected" data, text, or information blobs from web pages, emails etc., have them parsed, and import them as a structured event/appointment (e.g. .ics) on a standard calendar like Outlook, Google, or iCal? If not, what could I use (scripting, code, or existing tools and extensions) to build this on top? I come across a lot of unstructured information on webpages, emails, FB events etc. where I just want to add that information to my calendar. Instead of entering all the information by hand all the time, there should be an easy enough way to have the information parsed, organized and imported to a calendar: either directly to a calendar from the source, or translated to a standard format such as .ics that can be imported and saved easily. I would love to see some suggestions for this incorporating one or more of the following: on Windows with Chrome & Outlook; on iPhone/iPad to its Calendar. PS: I'll come back and see if I can add more information to this question, and to answer it as well. I have not found a solution yet.
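
    The parsing side (pulling a title and date out of arbitrary selected text) is the hard part, and something like python-dateutil would be needed there, but the output side is small: an .ics file is just a short block of text that Outlook, Google Calendar and iCal all import. A sketch that wraps already-extracted values into a VEVENT; the example values are placeholders for whatever the parser produced:

        # make_ics.py - sketch: turn an already-parsed title/start/end into an
        # .ics file that Outlook, Google Calendar or iCal can import.
        from datetime import datetime, timedelta
        import uuid

        def to_ics(title, start, end, location=""):
            fmt = "%Y%m%dT%H%M%S"
            return "\r\n".join([
                "BEGIN:VCALENDAR",
                "VERSION:2.0",
                "PRODID:-//selection-to-calendar sketch//EN",
                "BEGIN:VEVENT",
                f"UID:{uuid.uuid4()}",
                f"DTSTAMP:{datetime.utcnow().strftime(fmt)}Z",
                f"DTSTART:{start.strftime(fmt)}",
                f"DTEND:{end.strftime(fmt)}",
                f"SUMMARY:{title}",
                f"LOCATION:{location}",
                "END:VEVENT",
                "END:VCALENDAR",
                "",
            ])

        start = datetime(2012, 6, 1, 19, 0)                 # placeholder parsed values
        event = to_ics("FB event from selection", start, start + timedelta(hours=2))
        with open("event.ics", "w", newline="") as f:       # .ics wants CRLF line ends
            f.write(event)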

    Read the article

  • CPU usage always below 10% in windows server 2008 r2 x64

    - by ???
    I am using a server running Windows Server 2008 R2 to run my program. The CPU of the server is an Intel Xeon X5570 2.93GHz, with 2 processors and 8 cores per processor. However, I found that the CPU usage is almost always below 10% even when I use 32 threads in my program. I also found that sometimes the CPU usage can reach as high as 93% in Task Manager when running my program, and at those moments my program processes over 1000 files per second, while normally it only processes over 50 files per second. However, this does not happen often. I used tools downloaded from the internet to make sure no core sleeps when the server is on; nothing changed. Also, I edited the Windows registry to make sure that I, as an administrator, have no CPU usage limit, but it changed nothing. Is there any way I can make full use of my CPU? That is to say, each core runs a thread of my program and the total CPU usage can reach over 50% when I use a reasonable number of threads. Has this happened to any of you? Could you help me with this? Thank you!

    Read the article
