Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and see the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and with the stupid definitions of Google's dictionary, so I decided to scratch my own itch. I randomly stumbled on a site called Wordnik, which provides examples, definitions, and lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

        <?xml version="1.0" encoding="utf-8"?>
        <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
          <os:display>
            <os:name>View on Wordnik</os:name>
            <os:description>Looking up words on an awesome word site called Wordnik</os:description>
            <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Define">
            <os:activityAction context="selection">
              <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    That's it. To get it, go here. Enjoy!

    Read the article

  • Storage disk format to be used by all OSes

    - by ClothSword
    Aiming to have a large internal HDD to store files, and several OSes on separate disks. They all need to read and write to one storage drive:

        Ubuntu (13.04+)
        Mac OS X (10.8+)
        Windows (7+)

    What format should the drive be? I would like to avoid buying third-party software. Here's what I have discerned so far:

        NTFS  - Can't be written to by Mac without buying third-party software?
        ext3  - Windows can't read it; third-party software is in development. Mac has OSXFUSE.
        HFS+  - Buy third-party software and/or faff around to get it working on Windows.
        exFAT - Cross-platform, but breaks MS patents?

    Read the article

  • Converting a DWG/DXF to CSV or Excel

    - by Menno Gouw
    I'm using ZWCAD and I need to get the coordinates of hundreds of blocks into an Excel sheet or .CSV file so I can import them into the GPS hardware. I know there are plenty of tools for AutoCAD; I could probably even write one myself, but as far as ZWCAD goes I seem to be out of options. However, ZWCAD saves to DWG too, and exports to all the other familiar CAD extensions. So I was wondering: if I just save the blocks I need to export to a certain file, might there be a tool/program to convert that directly into .CSV?
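    Since ZWCAD can export DXF, a small script may be all that's needed. Below is a minimal sketch in Python using the ezdxf library (the file names are assumptions); it dumps the name and insertion point of every block reference to a CSV file:

        # Sketch: export block insertion points from a DXF file to CSV.
        # Assumes the drawing was saved as DXF (e.g. from ZWCAD) and that
        # ezdxf is installed (pip install ezdxf).
        import csv
        import ezdxf

        doc = ezdxf.readfile("drawing.dxf")   # hypothetical input file
        msp = doc.modelspace()

        with open("blocks.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["block", "x", "y", "z"])
            for insert in msp.query("INSERT"):  # INSERT = block reference
                p = insert.dxf.insert           # insertion point
                writer.writerow([insert.dxf.name, p.x, p.y, p.z])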

    Read the article

  • TextMate - completion using an external file or file contained in project?

    - by Neil Baldwin
    Does anyone know how to get TextMate to search an external file (or even the files contained in a TextMate "project") with which to perform word completion? I'm coding some stuff on the C64 (using TextMate to write the code) and I have an external file containing labels for all of the hardware registers/KERNAL routines, e.g. VIC2InteruptStatus = $D019. It would be really handy to be able to type, say, 'VIC2I', press the key for word completion, and have TextMate find matches in the external library file, rather than how I'm doing it at the moment: opening the library file and copy-pasting the register names into my code.

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often (about 1 in 100 commits, I think, and with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits: a programmer would rather make one commit in a week than test the whole project's stability/compilability ten times a day. For compilable code only, I use tags and some selected branches (trunk etc.). I see these reasons to commit not-fully-working or non-compilable code: If I am developing a new feature, it is hard to make it work in just a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing some function's prototype or interface, I may have to make hundreds of changes (not mechanical changes, but intellectual ones), and one such change could correspond to hundreds of commits (but if I want all commits to be stable, I have to commit once instead of a hundred times). In all these cases, making only stable commits means making commits containing very many changes, and it becomes very hard to find out "What happened in this commit?". Another aspect of the problem is that compiling code gives no guarantee of proper working. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or version control system? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
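    For reference, the kind of hook alluded to above is straightforward. Here is a minimal sketch of a git pre-commit hook written in Python; the check command is a placeholder, substitute your own build or test runner. Saved as .git/hooks/pre-commit and made executable, it blocks any commit that fails the check:

        #!/usr/bin/env python3
        # Sketch of a git pre-commit hook: reject the commit if the
        # project does not build/test cleanly. "make test" is a
        # hypothetical command; replace it with your own check.
        import subprocess
        import sys

        result = subprocess.run(["make", "test"])
        if result.returncode != 0:
            print("Commit rejected: the test build failed.")
        sys.exit(result.returncode)  # nonzero exit aborts the commit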

    Read the article

  • What is Finder doing when it has a spinner in the bottom-right?

    - by rspeicher
    Whenever I open a folder on a remote network share that hasn't been opened since I last booted, Finder displays an animated spinner in the bottom-right (picture omitted) for about 30 seconds, and won't display any of the folder's contents until it's done doing whatever it's doing. This makes browsing the share painfully slow for seemingly no reason. Why's it doing this? I should note that I've disabled .DS_Store files littering network shares using defaults write com.apple.desktopservices DSDontWriteNetworkStores true, so maybe that's why. I'm kind of hoping it's something else, though.

    Read the article

  • Windows command line built-in compression/extraction tool?

    - by Will Marcouiller
    I need to write a batch file to unzip files to their current folder from a given root folder:

        Folder 0
        |----- Folder 1
        |      |----- File1.zip
        |      |----- File2.zip
        |      |----- File3.zip
        |      |----- Folder 2
        |             |----- File4.zip
        |
        |----- Folder 3
               |----- File5.zip
               |----- FileN.zip

    I want my batch file to be launched like so:

        ocd.bat /d="Folder 0"

    It should then iterate through all of the subfolders and unzip each file exactly where the .zip file is located. So here's my question: does Windows (from XP at least) have a command-line interface to its embedded zip tool? Otherwise, shall I stick with a third-party util?
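    Windows XP's built-in zip support has no documented command-line interface, so a script that walks the tree is the usual workaround. A minimal sketch in Python (the root folder comes from the command line, e.g. python unzip_all.py "Folder 0"; extracting next to each archive matches the layout above):

        # Sketch: extract every .zip under a root folder into the folder
        # that contains it. Archives created by the extraction itself
        # are not revisited.
        import os
        import sys
        import zipfile

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith(".zip"):
                    path = os.path.join(dirpath, name)
                    with zipfile.ZipFile(path) as zf:
                        zf.extractall(dirpath)  # unzip in place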

    Read the article

  • Customers Deploying Sun Oracle Database Machine

    - by kimberly.billings
    Philippine Savings Bank (PS Bank) recently deployed the Sun Oracle Database Machine to underpin its enterprise-wide analytics platform. Queries and requests that used to take from three hours to several days now complete in less than one minute, with near real-time updates. Read the press release. EFU General Insurance also announced this week that they have deployed the Sun Oracle Database Machine. With Oracle, EFU will be able to open more sales channels via the Web and facilitate integration with other companies. As a result, more quality services can be offered to its customers via the Web thanks to a more agile and reliable IT infrastructure. In addition, a centralized IT environment will offer EFU management a real-time view of key information, enabling EFU to analyze business trends and make timely decisions. Read the press release. Let us know about your Sun Oracle Database Machine deployment!

    Read the article

  • Windows 2008 VPS always crashes when out of disk space

    - by Pickels
    Hello, I am renting a Windows Server 2008 DC SP2 VPS for hosting my ASP.NET projects. For the second time this month, my VPS has run out of disk space. The first time it was a log file that got too big; yesterday it was my mistake of uploading a website without noticing the lack of space on the VPS. The side effect is that my VPS corrupts some files when trying to write them: last time it was Plesk that stopped working, yesterday it was IIS. So I was wondering, is this normal behavior? I called my service provider to ask if they could restore a backup and whether this is normal, and they assured me it was. I am not trying to blame them, and I know it's mostly my fault for not monitoring my VPS better and for not setting better defaults.

    Read the article

  • Concurrent modification during backup: rsync vs dump vs tar vs ?

    - by pehrs
    I have a Linux log server where multiple applications write data. Data is written in bursts, and to a lot of different files. I need to make a backup of this mess, preferably preserving as much coherence between the file versions as possible and avoiding truncated files. The total amount of data on the server is about 100 GB. What I would really want (but can't have) is to shut down, back up the system cold, and then start it up again. What kind of guarantees against concurrent modification do the various backup tools give? When do they "freeze" the file versions? I am looking at rsync, dump and tar at the moment, but I am open to other (open source) alternatives. Changing the application or blocking writes during backups is sadly not an option. The system is not running LVM (yet), but I have considered it for rebuilding the system and then using snapshots.

    Read the article

  • copy/paste on MacOS gets 'Stuck'

    - by bmargulies
    Snow Leopard. I commonly use copy in a Terminal window to grab stuff I have to paste. A few times in the last few days, the clipboard has gotten 'stuck' on some old text: I do the copy gesture (with the keyboard or the Edit menu) in the Terminal window, paste somewhere else, and get something other than what I just copied. If all else fails, I can always tell the Terminal to write its contents to a file and use that. But I'm wondering if there's something I'm doing to get into this pickle, and if there's any way out.

    Read the article

  • Configuring htaccess to show authentication prompt only for subdomain

    - by Philipp Lenssen
    How do I write the htaccess so that it will only require authentication on admin.example.com, but not on www.example.com (e.g. by using some if-else clause)? Background: I have a site running in two modes. The admin mode should be reached at something like admin.example.com, whereas the normal mode would be www.example.com, but both should point to the same directory & scripts within it (the scripts then turn on certain editing features by checking whether they are accessed from the admin subdomain). Edit: I can now see this has been asked and answered at Stack Overflow, though I can't get the top answer to work for me...
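    For what it's worth, one commonly cited approach for Apache 2.2 (a sketch, untested here; adjust the host pattern and htpasswd path) sets an environment variable from the Host header and only enforces the login when it is present:

        # Sketch for .htaccess (Apache 2.2): require a login only when
        # the request arrives via the admin subdomain.
        SetEnvIfNoCase Host ^admin\.example\.com$ require_auth=true

        AuthType Basic
        AuthName "Admin area"
        AuthUserFile /path/to/.htpasswd
        Require valid-user

        # Allow everyone, except requests carrying require_auth, which
        # must then satisfy the auth requirement instead.
        Order allow,deny
        Allow from all
        Deny from env=require_auth
        Satisfy any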

    Read the article

  • How can I use vim as a word processor with soft word wrap and only 40 columns?

    - by Angela
    I have been trying to use vim as a word processor to do most of my initial drafting, then open the file in AbiWord or OpenOffice for the final printing. The problem has been that, even with :set linebreak and :set wrap, when I open the file there are a ton of line breaks that I have to go and remove manually. What I want is to write at 40 columns with soft word wrap, so that when I open the file in OpenOffice it is restored to whatever that column width is, preserving things like carriage returns but without all those line breaks.
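    For the record, the usual recipe for pure soft wrapping in vim is to turn hard wrapping off entirely; a sketch for ~/.vimrc (the 40-column window is taken from the question and works best in GUI vim):

        " Sketch of word-processor-style settings:
        " soft-wrap long lines at word boundaries, never insert
        " hard line breaks into the file itself.
        set wrap          " visually wrap long lines
        set linebreak     " break at word boundaries, not mid-word
        set textwidth=0   " never auto-insert hard line breaks
        set wrapmargin=0  " ditto for the right-margin variant
        set columns=40    " show a 40-column window (GUI vim)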

    Read the article

  • How do you keep your basic skills from atrophy?

    - by kojiro
    I've been programming for about 10 years, and I've started to migrate to more of a project management position. I still do coding, but less often now. One of the things that I think is holding me back in my career is that I can't "let go". I think I fear letting hard-won programming skills atrophy while I sit in meetings and annotate requirements. (Not to mention I don't trust people to write requirements who don't understand the code.) I can't just read books and magazines about coding. I'm involved in some open source projects in my free time, and Stack Overflow and friends help a bit, because I get the opportunity to help people solve their programming problems without micromanaging, but neither of these is terribly structured, so it's tempting to work first on the problems I can solve easily. I guess what I'd like to find is a structured set of exercises (don't care what language or environment) that:

        … I can do periodically,
        … has some kind of time requirement so I can tell if I've been goofing off,
        … has some kind of scoring so I can tell if I'm making mistakes.

    Is there such a thing? What would you do to keep your skills fresh?

    Read the article

  • How do I enable deflate on apache2? (Debian)

    - by acidzombie24
    I looked at the other answers and those solutions did not help. I am completely confused about how to do this. I know almost nothing about Apache or how to use it, and nearly all the articles say "write this" but don't tell me where. So far I have done this:

        # a2enmod deflate
        Module deflate already enabled

    Then I tried writing this in httpd.conf:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css

    Then I wrote a longer version which involved deflate.c. After those didn't work, I deleted them all and wrote the AddOutputFilterByType line in apache2/sites-enabled/000-default inside of [...]. I used http://www.whatsmyip.org/http_compression/ to test, and every time it says no compression. I used another site, but I could never get compression. Also, I restart the server every time I save a file. So what could be the problem, and how do I enable this properly?
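    For context, a typical working setup on Debian (a sketch; the vhost path is the Debian default) puts the filter inside the virtual host in /etc/apache2/sites-enabled/000-default, wrapped in an IfModule guard, and then reloads Apache:

        # Sketch: compress common text types when mod_deflate is loaded.
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
        </IfModule>

        # Then, on Debian:
        #   a2enmod deflate
        #   /etc/init.d/apache2 restart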

    Read the article

  • Debian pure-ftpd, Restrict access

    - by durduvakis
    I am running Debian Wheezy with ISPConfig 3 plus ModSecurity, and I would like to restrict FTP access to specific IP(s) globally (not just for specific FTP users), which could be 127.0.0.1 or one I would manually add later. I would also like to completely disable FTP access from the web but allow it from FTP client software, if that is possible. Closing firewall ports is not what I want: I know I could do this with some firewall rule, but that is not what I currently need. I have managed to do this for phpMyAdmin, for example, inside its .conf file, but unfortunately I cannot find any configuration to alter for pure-ftpd on my system. Restricting web FTP access may be possible by adding some rule to the apache2 conf, but I am not sure how to write such a rule. Thanks to everyone who can help. Cheers!

    Read the article

  • How can I pass an external instance to the constructor of an object that's being created using the default XNA XML content loader?

    - by Michael
    I'm trying to understand how to use the XNA XML content importer to instantiate non-trivial objects that are more than a collection of basic properties (e.g., a class that inherits from DrawableGameComponent or GameComponent and requires other things to be passed into its constructor). Is it possible to pass existing external instances (e.g., an instance of the current Game) to the constructor of an object that's being created using the default XNA XML content loader? For example, imagine that I have the following class, inheriting from DrawableGameComponent:

        public class Character : DrawableGameComponent
        {
            public string Name { get; set; }
            public Character(Game game) : base(game) { }
            public override void Update(GameTime gameTime) { }
            public override void Draw(GameTime gameTime) { }
        }

    If I had a simple class that did not need other parameters in its constructor (i.e., the Game instance), then I could simply use this XML:

        <XnaContent>
          <Asset Type="MyNamespace.Character">
            <Name>John Doe</Name>
          </Asset>
        </XnaContent>

    ...and then create an instance of Character using this code:

        var character = Content.Load<Character>("MyXmlAssetName");

    But that won't work, because I need to pass the Game into the constructor. What's the best way to handle this situation? Is there a way to pass in things like the current Game using the default XNA XML content loader? Do I need to write my own XML loader? (If so, how?) Is there a better object-oriented design that I should be using for my classes? Note: Although I used Game in this example, I'm really just asking how to pass any type of existing instance to my constructors. (For example, I'm using the Farseer Physics Engine, and some of my classes also need a reference to the Farseer World object too.) Thanks in advance.

    Read the article

  • maintaining redirects in nginx from an external source

    - by Sascha
    I am in the situation of having to give our marketing department the opportunity to maintain their redirects on their own. Until now, they passed the information to the IT department and we maintained it for them in nginx.conf. Some of these guys are quite familiar with redirects in IIS or even in Apache, but it is not an option to give them direct access to the nginx configuration. I see that there is no nginx support for .htaccess files, which I could otherwise have given them access to, and I would also prefer not to grant write access to a conf file that nginx includes; I expect that our marketing people would break our nginx setup within hours... Is there a secure possibility that doesn't involve giving them access to the heart of our load balancer?
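    One pattern worth considering (a sketch, untested; the file paths are assumptions) is to isolate the redirects in a dedicated file that nginx includes, have marketing submit changes to that file through a reviewed deployment step, and validate with nginx -t before reloading, so a syntax error never reaches the live balancer:

        # nginx.conf (sketch): marketing's redirects live in their own file
        server {
            listen 80;
            server_name www.example.com;
            include /etc/nginx/marketing-redirects.conf;
        }

        # /etc/nginx/marketing-redirects.conf (maintained by marketing):
        location = /old-campaign { return 301 http://www.example.com/new-campaign; }
        location = /spring-sale  { return 301 http://www.example.com/sale; }

        # validate before applying:
        #   nginx -t && nginx -s reload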

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I have not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • Best way to find the computer a user last logged on from?

    - by Garrett
    I am hoping that somewhere in Active Directory the "last logged on from [computer]" is written/stored, or that there is a log I can parse. The purpose of wanting to know the last PC logged on from is to offer remote support over the network: our users move around pretty infrequently, but I'd like to know that whatever record I'm consulting was updated that morning (when they logged in, presumably) at minimum. I'm also considering login scripts that write the user and computer names to a known location I can reference, but some of our users don't log out for 15 days at a time. If there is an elegant solution that uses login scripts, definitely mention it - but if it happens to work for merely unlocking the station, that would be even better!
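    As a sketch of the login-script approach mentioned above (the share path is hypothetical; note it only fires at logon, not at unlock), a single line in the domain logon batch file is enough to build a per-user record of the last machine used:

        rem Sketch for a domain logon script: append a timestamped
        rem record of the machine the user logged on from.
        echo %DATE% %TIME% %COMPUTERNAME% >> \\fileserver\logonlogs$\%USERNAME%.log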

    Read the article

  • Is Software RAID1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300 MB of data, I'm looking at mdadm RAID-1 over a GNBD device and a local hard disk, using the "--write-mostly" option so the reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain from tmpfs, assuming there's enough RAM available?
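    mdadm's --write-mostly flag was designed for exactly this case. A sketch of the create command (device names are assumptions; --write-mostly marks the devices listed after it, so reads will prefer the local disk):

        # Sketch: RAID-1 of a local partition and a GNBD device, with the
        # network device marked write-mostly so reads hit the local disk.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              /dev/sda3 --write-mostly /dev/gnbd0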

    Read the article

  • SSD install - what do I need to watch out for when reconfiguring SATA ports?

    - by tim11g
    I installed a Samsung 840 SSD in a Windows 7 machine. It seems to be working fine, but I'm not seeing the expected performance: the AS SSD benchmark gives 76 for read and 138 for write. At the upper left of the benchmark it says "pciide - BAD" and "31K - BAD". I'm assuming "pciide - BAD" means the motherboard (Gigabyte GA-P35-DS4) is configured for IDE emulation and needs to be changed to native SATA. I don't know what the "31K" refers to. The BIOS settings look like this (screenshot omitted). I saw this article, which indicates that changing the SATA mode of the boot drive can cause problems (blue screen): "Error message occurs after you change the SATA mode of the boot drive". What is the correct procedure to change the SATA mode without causing a system failure? Apply the registry change from the MSFT article above first, then reboot and change the SATA mode? Will the SATA mode change in the BIOS affect other drives?
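    For reference, the fix in the linked Microsoft article (as I understand it; treat this as a sketch, not gospel) is to enable the AHCI driver before switching the BIOS: apply the registry change below, reboot into the BIOS, switch the SATA mode to AHCI, then boot Windows.

        Windows Registry Editor Version 5.00

        ; Sketch of the linked article's fix for Windows 7: set the
        ; msahci service to start at boot so Windows can still boot
        ; after the BIOS SATA mode is switched to AHCI.
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\msahci]
        "Start"=dword:00000000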

    Read the article

  • Corrupt Windows 7, Chkdsk finds errors continuously after multiple tries

    - by mj_
    My Windows 7 instance is all corrupt on my Dell XPS laptop (my kid hibernated it as it was starting). I got the Windows 7 installation disc and tried to repair the installation. The repair goes through and says it's done, but it doesn't know whether the repair succeeded; it just says to reboot. I can also open a command line and run chkdsk /F manually there. What's strange is that I've run it 20 times, and chkdsk claims to fix things every time. Eventually it stops and I reboot. Much to my chagrin, my machine is still broken, and chkdsk gives an error at the end saying it can't write a log file. I've also run a full diagnostic using the Dell diagnostic tools; the tools say that everything is okay. Any suggestions on what I should do next? Any help appreciated. mj

    Read the article

  • fdisk -l only displays boot partition

    - by Franklin
    I have a SAN, and it's able to read and write to the 50TB RAID just fine, but when I run fdisk -l it only lists the boot partition of the SAN server, and doesn't display anything about the other partitions on the RAID. I've also tried using parted -l with the same result. Now when I type mount it shows that the partitions are mounted just fine. I've never seen this happen. The box is running Openfiler 2.3 (I know it's old, we're in the process of upgrading all our old equipment). We have another SAN that's configured almost identically, and it's able to display the partition info with either of the two commands I mentioned above.

    Read the article

  • Space (and pipe sign) works on occasion only

    - by Timo Riikonen
    I have an issue where, when I type the pipe sign "|" followed by a space, I sometimes get the wrong type of space (\240) and my command fails. The issue persists across different shells. How could I fix this? I am using the Finnish keyboard layout.

        timo@timo-i7-ubuntu:~$ ps -ef | grep ruby
        timo   7169  2633  0 12:12 pts/2  00:00:00 ruby1.9.1 /usr/local/bin/rails new admin4
        timo   8736 26515  0 14:22 pts/4  00:00:00 grep --color=auto ruby
        timo@timo-i7-ubuntu:~$ ps -ef | grep ruby
        No command ' grep' found, did you mean:
         Command 'igrep' from package 'openimageio-tools' (universe)
         Command 'dgrep' from package 'debian-goodies' (main)
         Command 'rgrep' from package 'grep' (main)
         Command 'zgrep' from package 'gzip' (main)
         Command 'zgrep' from package 'zutils' (universe)
         Command 'sgrep' from package 'sgrep' (universe)
         Command 'lgrep' from package 'lv' (universe)
         Command 'egrep' from package 'grep' (main)
         Command 'ngrep' from package 'ngrep' (universe)
         Command 'grep' from package 'grep' (main)
         Command 'agrep' from package 'agrep' (multiverse)
         Command 'pgrep' from package 'procps' (main)
         Command 'xgrep' from package 'xgrep' (universe)
         Command 'vgrep' from package 'atfs' (universe)
         Command 'fgrep' from package 'grep' (main)
         grep: command not found
        timo@timo-i7-ubuntu:~$ cat pipecom
        ps -ef | grep rails
        timo@timo-i7-ubuntu:~$ cat pipecom2
        ps -ef | grep rails
        timo@timo-i7-ubuntu:~$ ./pipecom
        timo   7169  2633  0 12:12 pts/2  00:00:00 ruby1.9.1 /usr/local/bin/rails new admin4
        timo   8777  8775  0 14:26 pts/4  00:00:00 grep rails
        timo@timo-i7-ubuntu:~$ ./pipecom2
        ./pipecom2: line 1: $'\302\240grep': command not found
        timo@timo-i7-ubuntu:~$ diff -w pipecom pipecom2
        1c1
        < ps -ef | grep rails
        ---
        > ps -ef | grep rails
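    A possible fix, assuming X.org and that the \302\240 (non-breaking space) comes from AltGr+Space on the Finnish layout, is the standard XKB option that makes Space produce a plain space at every shift level; a sketch:

        # Sketch: reload the Finnish layout with the nbsp:none XKB
        # option, so AltGr+Space no longer produces a non-breaking space.
        setxkbmap fi -option nbsp:none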

    Read the article
