Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 496/880

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The use of desktop and laptop computers has increased phenomenally all around the world. This link gives you a picture of the power consumption of the various devices we use daily. To reduce power consumption, Hibernate is one of the best options, and it is provided by default in Windows Vista and Windows 7. The Hibernate feature lets you power the machine off without closing your applications; they are restored exactly as they were once you restart the machine. Hibernate is not enabled in Windows XP by default, and I've seen many people run their machines for months on end (never switching them off) because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:

    1. Right-click on the Desktop and click Properties.
    2. Go to the Screen Saver tab.
    3. Click the Power button.
    4. Select the Hibernate tab.
    5. Check the "Enable hibernation" checkbox.
    6. Apply the settings.

    Now when you try to shut down, the "Shut Down Windows" dialog shows a "Hibernate" option, and you can safely power the machine off without closing your applications or windows; they will be restored once you turn the machine back on. Sometimes you may want to provide this feature programmatically in the applications you develop for Windows, typically where a process needs a long time to finish; download managers are one of the best examples. Below is the code to hibernate the machine from .NET:

        using System.Windows.Forms;

        namespace CodeKicks.WinApp.Machine
        {
            public static class MyMachineHelper
            {
                public static void DoHibernate()
                {
                    // Use PowerState.Suspend instead to sleep rather than hibernate:
                    //Application.SetSuspendState(PowerState.Suspend, true, false);
                    Application.SetSuspendState(PowerState.Hibernate, true, false);
                }
            }
        }
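    If you administer machines and just need hibernation switched on, the same setting can also be toggled from the command line (a quick sketch; this assumes the powercfg utility shipped with Windows XP SP2 and later, run from an administrator prompt):

        powercfg /hibernate on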

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a "Gallery", which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format, with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the "SYSPREP" process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system, it asks for a few details such as what you want to call it. By doing this, you can essentially create your own gallery of systems, for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx   But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.
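    As a rough illustration of the PowerShell route mentioned above (a minimal sketch, not Buck's actual scripts; the cmdlet comes from the Windows Azure PowerShell module of that era, and the storage URL and local path are placeholders):

        # upload a local, SYSPREPped VHD to Azure blob storage
        Add-AzureVhd -Destination "http://mystorage.blob.core.windows.net/vhds/myserver.vhd" `
                     -LocalFilePath "C:\VMs\myserver.vhd"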

    Read the article

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know. I have an mdadm RAID5 system of 4x3TB, drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblocks and of the mdadm -E output of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with

        mdadm --assemble --force /dev/md0

    using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive, and removed the old /dev/sdc1 from the RAID:

        mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

        mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID, with an ETA of 300 minutes. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but fewer than 300 minutes) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts. I then removed /dev/sde1 from the RAID and re-added it; I can't remember why I did this; it was late.

        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing:

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, and found the drive order:

        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], but only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions: I now have two main questions:

    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is OK] if I just find the right device order?

    2. Is it important that /dev/sde1 be given device number [4] in the RAID? When I create it with

        mdadm --create /dev/md0 --assume-clean -l5 -n4 \
            /dev/sdb1 missing /dev/sdd1 /dev/sde1

    it is assigned number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again.

    Feel free to point out that this may not have been the best course of action; you'll find that I have realized this. It would be great if anyone has any suggestions.
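    For what it's worth, a brute-force variant of the approach described above can be sketched like this (hedged: not a guaranteed recovery procedure; every mdadm --create rewrites the superblocks again, and the metadata version and chunk size must match what the array was originally built with):

        # record whatever metadata is still readable before experimenting further
        for d in /dev/sdb1 /dev/sdd1 /dev/sde1; do
            mdadm -E "$d" > "examine-$(basename $d).txt"
        done

        # try one candidate device order with the failed slot left as "missing",
        # then check the filesystem without writing to it
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=1.2 --chunk=512 \
            -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1
        fsck.ext4 -n /dev/md0   # -n: report problems only, change nothing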

    Read the article

  • Inconsistent movement / line-of-sight around obstacles on a hexagonal grid

    - by Darq
    In a roguelike game I've been working on, one of my core design goals has been to allow the player to "play the game, not the grid." In essence, I want the player's positioning to be tactical because of elements in the game world, not simply because some grid tiles are more advantageous than others in relation to enemies. I am fine with world geometry not being realistic, but it needs to be consistent. In this process I have run into most of the common problems (square tiles? diagonal movement, LOS, corner cases, etc.) and have moved to a hexagonal tile grid. For the most part this has been great, and I've not had too many inconsistencies. Recently, however, I have been stumped by the following: points A and B are both distance 4 from the player (red lines). Line-of-sight to both is blocked by walls (black tiles). However, due to the hexagonal grid, A can be reached in 4 moves, whereas B requires 5 moves (blue lines). On a hex grid, "shortest path" seems divorced from "direct path": there may be multiple shortest paths to any point, but there is only one direct path (or two in some situations). This is fine; geometry need not be realistic. However, it also seems inconsistent: similar obstacles are more effective in some positions than in others. A player running away from an enemy should be able to run in any direction, increasing the distance between the two actors, yet when placing obstacles or traps between themselves and enemies, the player is best served by running in one of the six directions that don't have multiple shortest paths. Is there a way to rationalise this? Am I missing something that makes this behaviour consistent? Or is there a way to make this behaviour consistent? I am most certainly over-thinking this, but as it is one of my goals, I should give it due diligence.

    Read the article

  • BPM 11gR1 now available on Amazon EC2

    - by Prasen Palvankar
    The new Oracle BPM 11gR1, including the latest Oracle SOA Suite 11gR1 Patch Set 2, is now available as an Amazon Machine Image (AMI). This is a fully configured image which requires absolutely no installation and lets you get hands-on experience with the software within minutes. The image has all the required software installed and configured, and includes the following:

    - Oracle 11g Database Standard Edition
    - Oracle SOA Suite 11gR1 Patch Set 2
    - Oracle BPM 11gR1
    - Oracle WebCenter with BPM Process Spaces
    - Oracle Universal Content Management
    - Oracle JDeveloper with SOA and BPM plugins

    Note: Use of this AMI requires acceptance of the Oracle Technology Network (OTN) terms of use. To use this AMI, follow these steps:

    1. Log in to your Amazon account and browse to the Amazon AWS Console. If this is the first time you are using Amazon Web Services, please visit https://aws.amazon.com/ec2/ for information on Amazon Elastic Cloud Computing and how to get started with Amazon EC2.
    2. Make sure the security group you will be using to launch the instance allows the following ports to be opened: 22 (SSH), 1521, 7001, 8001, 8888, 9001.
    3. Click on AMIs, change the viewing filters to 64-bit, and enter soa-bpm in the search box. You should see the following AMI: 083342568607/oracle-soa-bpm-11gr1-ps2-4.1-pub
    4. Select the AMI and click on Launch or Spot Request. For more information on spot requests, please visit the Amazon EC2 link above.
    5. Accept all the defaults and launch the instance.
    6. When the instance state changes to running, copy the assigned public host name and connect to it using either PuTTY or the ssh command. For PuTTY usage, refer to this document.
    7. Once you are connected to the instance, you will be presented with the terms of use. Accept the terms of use to proceed. This will prompt you to set passwords for your oracle OS login as well as for VNC. Note that the instance will not be usable until you have accepted the terms of use.
    8. The instance is now ready to use. The SOA/BPM and other servers are started automatically once you accept the terms of use. Initial startups can take about 5-10 minutes.

    If you would like to use the JDeveloper installed in the AMI, you can access it using either VNC or NX (you can get the NX client from NoMachine). /home/oracle/README.txt contains all the URLs that you can use to access the Enterprise Manager, BPM Composer, BPM Workspace, WebCenter, etc.
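    If you script your environment rather than using the console, the EC2 API tools of the time could open the same ports on a security group (a sketch; "default" is a placeholder group name, and this assumes the classic ec2-api-tools are installed and configured with your credentials):

        ec2-authorize default -p 22
        for p in 1521 7001 8001 8888 9001; do ec2-authorize default -p $p; done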

    Read the article

  • Releasing an open source project without getting embarrassed

    - by Hopeful
    I've been working by myself on a fairly large open source project for quite a while, and it's nearing the point where I'd like to release it. However, I'm self-taught, and I don't really know anyone who could adequately review my project. A few years ago I released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I posted it. Even though the code worked, the criticism was accurate but brutal. It prompted me to start searching for best practices for everything, and in the end I feel it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people, and I feel like I've done some cool things in interesting ways with it. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. My two biggest fears with releasing this project, into which I've poured countless hours, are being absolutely embarrassed because I missed some patently obvious things due to my self-education, or, worse, releasing it to the sound of crickets. Has anyone been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on Stack Exchange, but it's not really set up for large projects, and I didn't feel the community there was large enough yet to get good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devastated in the process?

    Read the article

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min, and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification. Some additional information on the battery:

        kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
          native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
          vendor:               ASUS
          model:                1005P
          power supply:         yes
          updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
          has history:          yes
          has statistics:       yes
          battery
            present:             yes
            rechargeable:        yes
            state:               charging
            energy:              33.966 Wh
            energy-empty:        0 Wh
            energy-full:         34.9272 Wh
            energy-full-design:  47.52 Wh
            energy-rate:         3.7692 W
            voltage:             12.61 V
            time to full:        15.3 minutes
            percentage:          97.248%
            capacity:            73.5%
            technology:          lithium-ion
          History (charge):
            1345181483  97.248  charging
            1345181453  97.155  charging
            1345181423  97.062  charging
            1345181393  96.970  charging
          History (rate):
            1345181483  3.769   charging
            1345181453  3.899   charging
            1345181423  4.061   charging
            1345181393  4.201   charging

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charging
        present rate:            332 mA
        remaining capacity:      3149 mAh
        present voltage:         12612 mV

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      3209 mAh
        battery technology:      rechargeable
        design voltage:          10800 mV
        design capacity warning: 10 mAh
        design capacity low:     5 mAh
        cycle count:             0
        capacity granularity 1:  44 mAh
        capacity granularity 2:  44 mAh
        model number:            1005P
        serial number:
        battery type:            LION
        OEM info:                ASUS
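    Until the root cause is isolated, a small watchdog run from cron can at least supply the missing warning (a rough sketch; the battery sysfs path and the 10% threshold are assumptions, and notify-send needs a notification daemon running in the desktop session):

        #!/bin/sh
        # warn when the battery is discharging and nearly empty
        BAT=/sys/class/power_supply/BAT0
        PCT=$(cat "$BAT/capacity")
        STATUS=$(cat "$BAT/status")
        if [ "$STATUS" = "Discharging" ] && [ "$PCT" -le 10 ]; then
            notify-send -u critical "Battery low" "${PCT}% remaining - plug in or suspend now"
        fi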

    Read the article

  • Hybrid USB Install Method - netboot and iso

    - by Samus Arin
    I was following the steps here ("Preparing Files for USB Memory Stick Booting") https://help.ubuntu.com/10.04/installation-guide/i386/boot-usb-files.html to create an installation USB drive for 12.1. The very first paragraph of the article states: "The second is to also copy a CD image onto the USB stick and use that as a source for packages, possibly in combination with a mirror." However, the only instruction mentioned regarding an ISO image is to simply copy one somewhere onto the drive (after it has been made bootable and syslinux, vmlinuz and initrd.gz have been installed/copied): "you should now copy an Ubuntu ISO image onto the stick." I thought it strange that there were no configuration steps for "pointing" the kernel to the ISO (like a line in syslinux.cfg, a boot: option, or something), but went ahead with the install anyway. I don't think the ISO was used at all; it appeared that all the OS files were downloaded during the install process. I was therefore wondering if anyone knows how to use this local ISO image in this particular installation technique (I know the image can be installed with dd, but that's a different technique), because I need to reinstall (I installed Unity, but it's way too much for my little Atom-based netbook). Thank you.
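    For reference, the boot files from that guide are normally driven by a syslinux.cfg no more complicated than this (a sketch based on the installation guide's hd-media layout; with the hd-media kernel and initrd, the installer's iso-scan component searches the stick's filesystem for the copied ISO on its own, which is why the guide has no explicit "pointer" step):

        default vmlinuz
        append initrd=initrd.gz

    If the installer still downloads everything, it usually means iso-scan did not find or did not accept the ISO, for example because the image version does not match the installer files on the stick.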

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC by default in DB2 V9.7 so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates everything with a small page size (64KB), expanding and shrinking the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to set the size of DATABASE_MEMORY manually so that DB2 can use 256MB pages for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. The pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. That leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. Good practice is to set it to the number of physical devices used by the database table space containers.
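    In practice the two changes come down to something like this (a sketch; the database name, the DATABASE_MEMORY value in 4KB pages, and the cleaner count are placeholders to be sized for your own system):

        db2 "update db cfg for MYDB using DATABASE_MEMORY 2097152"   # fixed size instead of AUTOMATIC
        db2 "update db cfg for MYDB using NUM_IOCLEANERS 8"          # roughly one per physical device

        # then confirm the buffer pools landed on 256MB pages
        pmap -s $(pgrep db2sysc) | grep 256M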

    Read the article

  • Dual booting Windows and Arch Linux (with GRUB2) - after using Windows, Windows Boot Manager made first in boot priority list

    - by louis058
    I am dual booting Windows 7 and Arch Linux (both 64-bit) with GRUB2, using the 64-bit EFI version. I partitioned my drive as GPT and installed Windows first according to this guide. I then installed Arch Linux using the Beginners' Guide, installing grub2-efi-x86_64 in the process. Everything is working fine now, with one exception. I can set the boot priority in the BIOS (or is it UEFI?) to have GRUB try to boot before Windows Boot Manager, and then chainload Windows Boot Manager from GRUB. However, when I actually use Windows this way, upon shutting down and turning the machine on again, or rebooting, Windows sets Windows Boot Manager first in the priority list again, with the result that I have to manually put GRUB first again, or I can't boot into Linux. My motherboard is an ASRock H61M/USB3, if that helps. I want to know how to turn off this behaviour.
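    One thing worth trying from the Linux side is pinning the order with efibootmgr (a sketch; the four-digit entry numbers below are examples, so list your actual entries first and substitute your own):

        sudo efibootmgr                # list entries, e.g. Boot0000* grub, Boot0001* Windows Boot Manager
        sudo efibootmgr -o 0000,0001   # put GRUB first in BootOrder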

    Read the article

  • General video performance affected on Mac OSX 10.5 (PowerBook G4)

    - by r0ca
    Hi all, I'm quite new to Macs and I just got a PowerBook G4 for free. I installed OS X 10.5 on it, and for the first two weeks everything went fairly smoothly, even though this machine is roughly comparable to a Pentium III. I'm not expecting amazing video performance, but I'd at least like to be able to watch some videos on YouTube. Yesterday night I installed Office 2008 for Mac, and this morning, even after a reboot, my computer is much slower than it used to be. I watched a YouTube video and the framerate was 1:1. I also noticed it on Flash ads; they're much slower! Is there anything I can do to increase video performance, see the list of running processes and which ones are taking the most GPU or CPU, what's using the most RAM, and things like that? What would you Mac pros do on an old laptop with OS X 10.5? Thanks!
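    For the process-list part at least, OS X ships with Activity Monitor (in Applications > Utilities) as well as the command-line top; sorting by CPU quickly shows what is eating the machine (a small usage sketch):

        top -o cpu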

    Read the article

  • Error installing avogadro with CMake 'lconvert: could not exec No such file or directory'

    - by Orr22
    I'm brand new to Ubuntu. I'm trying to install Avogadro. The program needs the following packages, which I was able to install: CMake, OpenBabel 2.3.2, Qt4, Git, Eigen2. Here is the recipe to install it:

        cd $HOME/src
        git clone git://github.com/cryos/avogadro.git
        mkdir -p $HOME/build/avogadro
        cd $HOME/build/avogadro
        cmake $HOME/src/avogadro
        make -j2
        sudo make install

    It was unable to compile, but when I skipped the 'git clone' step it seemed to work just fine. After several stops during the CMake configuration process (software updates, getting Doxygen, flex and bison), I was able to get it to complete. But when I run the 'make -j2' command, the build stops as follows:

        Orr22@javi-87:~/build_avogadro$ make -j2
        [  0%] Built target elementcolor
        [  0%] Built target bsdyengine
        [  2%] Built target spglib
        [  3%] Built target navigatetool
        [  4%] Built target tubegen
        [  4%] Generating libavogadro_hu.qm
        [  6%] Built target OpenQube
        [  6%] Generating moc_animation.cxx
        lconvert: could not exec '/usr/lib/i386-linux-gnu/qt5/bin/lconvert': No such file or directory
        make[2]: *** [libavogadro/src/libavogadro_hu.qm] Error 1
        make[2]: *** Waiting for other jobs to finish....
        make[1]: *** [libavogadro/src/CMakeFiles/avogadro.dir/all] Error 2
        make: *** [all] Error 2

    Any suggestions on how to proceed? Thanks in advance, Orr22
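    The failing path suggests the build resolved lconvert to a Qt 5 location that is not installed. Two things worth trying (assumptions based on the path in the error, not a verified fix; package names vary between Ubuntu releases): install the Qt 5 linguist tools that own that path, or force the build back to Qt 4, then re-run cmake from a clean build directory:

        sudo apt-get install qttools5-dev-tools    # provides .../qt5/bin/lconvert on many releases
        rm -rf $HOME/build/avogadro && mkdir -p $HOME/build/avogadro
        cd $HOME/build/avogadro
        cmake $HOME/src/avogadro && make -j2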

    Read the article

  • XOLO X900–First mobile phone with Intel Power

    - by Rekha
    The XOLO X900 is the world's first smartphone with Intel inside®, brought to market by LAVA International Ltd., one of India's fastest growing handset brands, with R&D centres in Shenzhen (China) and Bangalore (India). The smartphone offers fast web browsing with its 1.6 GHz Intel processor and smooth multitasking using Intel's patented Hyper-Threading technology. It has optimized battery usage, a 4.03" hi-resolution LCD screen of 1024x600 pixels to ensure crisp text and vibrant images, an HDMI output port for TV, full HD 1080p playback, and dual speakers. It has an 8 MP HD camera with some DSLR-like features, allowing you to take up to 10 photos in less than a second, and 3D and HD gaming is immensely realistic thanks to the 400 MHz graphics processing unit. The operating system is Android 2.3 (Gingerbread), upgradable to Android 4.0. It has GPS, rear and front cameras of 8 MP and 1.3 MP respectively, and an accelerometer, gyroscope, magnetometer, ambient light sensor and proximity sensor. Intel's smartphone venture is beginning in India first: the phone is said to go on sale in India from April 23 onwards, at a best-buy price of approximately INR 22,000. It will be available at the Indian retail chain Croma, and in other retail stores and online stores from early May. The company is launching this smartphone in India first and a more powerful handset in China later this year. Depending on its success in India and China, Intel plans to enter the European and US markets; until then, Intel smartphones are only for Indian buyers. You can find more technical information on XOLO's site.

    Read the article

  • dpkg behaving strangely?

    - by Tom Henderson
    Whenever I use apt to install a package, I receive the same error message. Here is an example of trying to install wicd (which is already installed):

        Reading package lists...
        Building dependency tree...
        Reading state information...
        wicd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        3 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up tex-common (2.06) ...
        debconf: unable to initialize frontend: Dialog
        debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.)
        debconf: falling back to frontend: Readline
        Running mktexlsr. This may take some time... done.
        No packages found matching texlive-base.
        dpkg: error processing tex-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of texlive-binaries:
         texlive-binaries depends on tex-common (>= 2.00); however:
          Package tex-common is not configured yet.
        dpkg: error processing texlive-binaries (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of dvipng:
         dvipng depends on texlive-base-bin; however:
          Package texlive-base-bin is not installed.
          Package texlive-binaries which provides texlive-base-bin is not configured yet.
        dpkg: error processing dvipng (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         tex-common
         texlive-binaries
         dvipng
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!
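    The chain starts with tex-common failing to configure because texlive-base is missing. A common untangling sequence looks like this (a sketch of the usual order, not a guaranteed fix):

        sudo apt-get install texlive-base   # supply the package tex-common's postinst is looking for
        sudo apt-get -f install             # let apt repair the broken dependency state
        sudo dpkg --configure -a            # finish configuring anything left half-installed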

    Read the article

  • Installing SQL Server 2005 Express on Windows 8 [closed]

    - by Angel
    We have an application that installs a custom instance of Microsoft SQL Server 2005 Express as part of the whole installation process. Microsoft states that SQL Server 2005 Express is not compatible with Windows 8, but in reality it seems to install and work perfectly fine. The only problem is that during the installation a dialog appears saying it's not compatible, and offers options to get help online, continue with the installation anyway, or cancel. If you choose to continue anyway at all of these incompatibility prompts, the SQL Server instance is installed without any problem whatsoever. Does anyone know if there is a way to suppress these incompatibility messages during the SQL Server installation (or any installation, for that matter)?
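    One avenue worth testing is running the SQL Server 2005 Express setup silently, since the compatibility warning is an interactive Windows prompt (a sketch; the /qn switch and these parameters are documented for SQL Server 2005 Express unattended installs, but whether a silent run also avoids triggering the Program Compatibility Assistant on Windows 8 is an assumption to verify, and the instance name and SA password are placeholders):

        SQLEXPR.EXE /qn ADDLOCAL=ALL INSTANCENAME=MYAPP SECURITYMODE=SQL SAPWD=StrongPassw0rd!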

    Read the article

  • TeamCity NuGet private feed - Credentials

    - by Gaui
    I installed TeamCity and enabled the NuGet server, both the authenticated feed and the public feed. When I try to push packages to the server with the following command:

        > nuget push package.nupkg [API-Key-here] -s http://myserver/httpAuth/app/nuget/v1/FeedService.svc/

    I get the following prompt:

        Please provide credentials for: http://myserver/httpAuth/app/nuget/v1/FeedService.svc/

    and it asks me for both "UserName" and "Password". I've tried entering the credentials of the TeamCity administrator and of the Windows administrator, but nothing works. So I tried pushing to the public feed with the following command:

        > nuget push package.nupkg [API-Key-here] -s http://myserver/guestAuth/app/nuget/v1/FeedService.svc/

    Then I get the following:

        Failed to process request. 'Method Not Allowed'.
        The remote server returned an error: (405) Method Not Allowed..

    Regarding the authenticated feed: which credentials does it expect, and where do I specify them? And why is the public feed not working?
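    One approach that often helps with the credentials prompt is registering the feed as a named source with stored credentials, then pushing to the source by name (a sketch; the source name and account are placeholders, and the account should be a TeamCity user with permission to publish to that feed):

        nuget sources Add -Name TeamCity -Source http://myserver/httpAuth/app/nuget/v1/FeedService.svc/ -UserName teamcity-user -Password secret
        nuget push package.nupkg [API-Key-here] -s TeamCity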

    Read the article

  • A better way to encourage contributions to OSS

    - by Daniel Cazzulino
    Currently in the .NET world, most OSS projects are available via a NuGet package. Users have a very easy path towards *using* the project right away. But let's say they encounter some issue (maybe a bug, maybe a potential improvement) with the library. At this point, going from user to contributor (of a fix, a good bug repro, or even a spike for a new feature) is a very steep and non-trivial multi-step process: registering with some open source hosting site (CodePlex, GitHub, Bitbucket, etc.), learning how to grab the latest sources, building the project, formulating a patch (or forking the code), learning the source control software the project uses (Mercurial, Git, SVN, TFS), installing whatever tools are needed for it, reading about the contributor workflow for the project (do you fork & send pull requests? do you just send a patch file? do you just send a snippet? a unit test? etc.), and on, and on, and on. Granted, you may be lucky and already know the source control system the project uses, but really, I'd say the chances are pretty low. I believe most developers *using* OSS are far from familiar with them, much less with contributing back to various projects. We OSS devs like to be on the cutting edge all the time, ya' know, always jumping on the new SCC system, the new hosting site, the new agile way of managing work items, bug tracking, code reviews, etc., etc., etc. But most of our OSS users are largely the "... Read full article

    Read the article

  • Applescript to connect to bluetooth device

    - by Mark117
    I'm trying to create an AppleScript to let me connect to a Bluetooth device by its Bluetooth ID. So far, I've managed to get an AppleScript to turn on Bluetooth if it's off. Here's the code:

        # This is only necessary if AppleScripts are not yet allowed to change checkboxes
        tell application "System Events" to set UI elements enabled to true

        # Now change the Bluetooth status
        tell application "System Preferences"
            set current pane to pane id "com.apple.preferences.bluetooth"
            tell application "System Events"
                tell process "System Preferences"
                    # "Enabled" is checkbox number 2
                    if value of checkbox 2 of window "Bluetooth" is 0 then
                        click checkbox 2 of window "Bluetooth"
                    end if
                end tell
            end tell
            quit
        end tell

    Would someone know if and how it's possible to set up a new Bluetooth device, and whether it's possible to connect to a device based on its device name or its Bluetooth ID? I've also tried to record the action in Automator, but for the "set up new device" option, Automator just tells me: click on "" button. Thanks
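    As a side note, a saved script like this can be run straight from Terminal, which makes quick iteration easier (a small usage sketch; the file name and location are arbitrary):

        osascript ~/Scripts/bluetooth-on.applescript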

    Read the article

  • Failed to import SLES11 to SCOM2012

    - by siyang
    I am trying to import my SLES11 host into SCOM 2012, but it fails. Below are my steps, followed by the error message:

    1. Import the SLES11 MP.
    2. Make sure DNS resolves both the SCOM and SLES11 hosts.
    3. Install the SCOM agent and certificate.
    4. Generate the certificate on SLES11, import it into SCOM, use scxcertconfig -sign to sign it, and copy it back to SLES11. (I restarted scxadmin on SLES11 and rebooted the SCOM server.)
    5. On SCOM, try to discover the SLES11 host - this fails.

    Here is the error message:

        Message: Certificate signing was not successful.
        Details: Standard Output: Failed to start child process '/etc/init.d/scx-cimd' errno=13
        RETURN CODE: 1
        Standard Error:
        Exception Message:
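    errno=13 is EACCES (permission denied), so a reasonable first check on the SLES11 box is whether the agent's init script is executable and the agent restarts cleanly (a sketch; these are the default SCX agent paths, adjust if your install differs):

        ls -l /etc/init.d/scx-cimd                       # should be executable by root
        chmod 755 /etc/init.d/scx-cimd                   # only if the execute bits are missing
        /opt/microsoft/scx/bin/tools/scxadmin -restart   # restart the agent and watch for errors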

    Read the article

  • Independent SharePoint Trainer in DC ~ I conduct teacher-led SHAREPOINT user training anywhere ~

    - by technical-trainer-pro
    Your options: interactive, hands-on VIRTUAL or CLASSROOM style training for all SharePoint users and site admin owners. I also develop customized classes tailored to the specific design of any SharePoint site, acting as the translator for those left to understand and use it on an everyday basis. Audience: users, clients, stakeholders, trainers. Areas: functionality, operations, management, user site customization, ITIL training, governance process, change management, and industry- or client-specific scenarios. INDIVIDUAL RATE: $300 to join any class. GROUP RATE: $1500 for a private group of 6-10. Flexible scheduling. Contact me: [email protected] Local to DC/MD/VA; can train hands-on anywhere~

    Read the article

  • Issues running java in Solaris 9 container

    - by Matthew Watson
    Hi, I have a Solaris 9 container built from a physical server using flarcreate. Everything seems fine, except that trying to run any "java -server" process fails with the error below. This is on a Sun Fire T1000 machine running Solaris 10 10/09 s10s_u8wos_08a SPARC, with jdk1.5.0_15.

        Exception java.lang.OutOfMemoryError: requested -4 bytes for size_t in /BUILD_AREA/jdk1.5.0_15/hotspot/src/os/solaris/vm/os_solaris.cpp. Out of swap space?

    As far as I can tell, I'm not actually out of swap space. Running java in client mode works without a problem. Google's only suggestion is related to x86. Any suggestions? Thanks.
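    Before digging into the JVM itself, it may be worth confirming that the zone really has swap headroom, since capped zones commonly produce this error (a sketch; run from the global zone, and the zone name is a placeholder):

        swap -s                                   # overall swap usage on the system
        prctl -n zone.max-swap -i zone myzone9    # swap cap on the branded zone, if one is set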

    Read the article

  • Free internet radio station server software with remote broadcasting?

    - by Zachary Brown
    I am in the process of creating an internet radio station, but the two DJs I have for it are not able to be in one place. I need them to be able to log in to a web-based broadcasting session that still has full functionality: they need to be able to broadcast their live shows with talk and music, with the music stored on the server. I have checked out the BroadWave streaming media server from NCH, but it does not have the ability to log in as a DJ from a remote computer. I don't have any money for this, so I need it to be free. If that's not possible, I need it to be cheap!

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across its philosophy that everything should be specified as an interface where possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the use of factories quite often: since you can't instantiate a Property or Model, you must have something else do it for you (the Factory design pattern). My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate that class; I don't need to ask a factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be Page, Reference, Alias, or Root pearls. My question comes from trying to figure out whether I should adopt the Jena philosophy in my library which uses Jena, or disregard it, pick my own design philosophy, and stick with it.
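    For readers who have not used Jena, this is roughly what the factory-based, interface-only style looks like in practice (a minimal sketch using Jena's public API of that era; the URIs are placeholders):

        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;
        import com.hp.hpl.jena.rdf.model.Resource;

        public class JenaFactoryExample {
            public static void main(String[] args) {
                // Model is an interface; the factory picks the concrete,
                // memory-backed implementation for you
                Model model = ModelFactory.createDefaultModel();

                // Resources and Properties are likewise created through the
                // Model, never instantiated directly
                Resource pearl = model.createResource("http://example.org/pearl/1");
                pearl.addProperty(
                    model.createProperty("http://example.org/ns#", "title"),
                    "Example pearl");

                model.write(System.out, "RDF/XML");
            }
        }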

    Read the article

  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up two versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I installed the default PHP 5.4.19, and then tried to set up a second PHP, 5.2.17, to be run with FastCGI, following the second part of the tutorial completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only two lines, repeating:

        [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
        [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper directive, because Apache would not start without it:

        FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
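    An exit code of 255 from the wrapper usually means the wrapper script itself failed before php-cgi ever ran, so it is worth checking that the script is executable and execs the right binary (a sketch of a typical fcgiwrapper.sh for this kind of setup; the child and request limits are illustrative):

        #!/bin/sh
        # limits for the php-cgi FastCGI children
        PHP_FCGI_CHILDREN=4
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/local/php52/bin/php-cgi

    Then make sure it is executable (chmod +x /usr/local/php52/bin/fcgiwrapper.sh) and confirm /usr/local/php52/bin/php-cgi runs on its own; missing shared libraries also commonly produce exit code 255.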

    Read the article

  • Does Juniper Networks provide keyloggers with their software?

    - by orokusaki
    I noticed that I had a "USB Mass Storage Device" plugged in when there wasn't in fact anything plugged into any USB port. I disabled it via Windows (XP), but it's quite concerning. This appeared after installing Juniper Networks' software for VPN access to an IT guy's systems. I also notice there is a service called "dsNcService.exe" which apparently sends information over the internet, even when the VPN is not connected. The process restarts itself when I end it. Should I be worried that this software is tracking my keystrokes and broadcasting them to my IT guy?
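    As a first check of what the service actually talks to, Windows XP's netstat can map open connections to the executables that own them (a quick sketch; run from an administrator command prompt, and note that -b can be slow to resolve):

        netstat -b -n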

    Read the article
