Search Results

Search found 35708 results on 1429 pages for 'default copy constructor'.

  • Multiple Copies of Windows Calculator

    - by Brian Boatright
    Just did a clean install of Win7 x64. I have a Microsoft Ergo Keyboard 4000 and use the calculator key a lot. Previously I could hit it and get multiple copies of Calculator to pop up. Now it will only show one copy of Calculator. I tried adding a shortcut to the calculator app, but it has the same limitation. However, if I click the calculator icon, it will open a new one each time. How can I fix this so that each time I press the calculator key it will open a new copy?

  • VPN messes up DNS resolution

    - by user124114
    After connecting to a server with the Kerio VPN client (OS X Leopard), the internet (~web browsing) stopped working for the client. After poking around, the issue seems to be a bad DNS server (i.e., entering IPs directly works). After disconnecting from the VPN, the invalid DNS server disappears from scutil --dns and all's well again. Now, I don't understand why OS X on the client even changes the DNS settings -- internet should be routed through a different interface, through the default gateway, not through the VPN. Questions: By what mechanism does connecting the VPN client change the "default" DNS server? How can I stop the VPN client from changing routing/DNS rules? Where is this stuff stored/modified?

    Before VPN:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 10.66.77.1   # <---- default gateway = home router; all good
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        ...

    VPN connected:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 192.168.1.1   # <--- rubbish
          nameserver[1] : 192.168.2.1
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        ...

    The VPN doesn't appear among $ networksetup -listallnetworkservices.
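
    One rough workaround sketch, assuming the settings live in the System Configuration store that scutil/networksetup manage and that the primary service is called "AirPort" (the name is an assumption, so check the list first): after the VPN connects, pin that service's DNS back to the router.

        # see what the service is actually called
        networksetup -listallnetworkservices
        # force the resolver for that service back to the home router
        sudo networksetup -setdnsservers "AirPort" 10.66.77.1
        # confirm what resolver #1 is now
        scutil --dns | head
        # to undo and return to DHCP-supplied DNS
        sudo networksetup -setdnsservers "AirPort" empty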

  • opening Dbf files in oracle 10g

    - by nagaraju
    This is Nagaraju, from Hyderabad, India. I have installed the Oracle 10g trial version on my system (E drive) and created a database with my name (database: nagaraju); in it I created the tables, procedures, functions, sequences, etc. for my project. Due to a sudden problem I formatted my machine's C drive, and now I am not able to open my database; I need all the procedures and tables I created in it. I have now installed Oracle 10g again in another folder. How can I copy my old database into my new installation's database? Or can I copy the scripts of the procedures so that I can run them in the new database? I have all the data in the Oradata folder, like the DBF files, etc. Could you please help me with how to do that?
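
    A very rough sketch of one recovery route, on the assumption that the whole Oracle home and the Oradata folder on E: (datafiles, control files, redo logs, spfile) survived the C: format and that the new install is the same 10g release; the SID, paths and passwords below are only placeholders:

        :: recreate the Windows service for the old SID, then start the instance
        oradim -NEW -SID nagaraju -STARTMODE manual
        set ORACLE_SID=nagaraju
        sqlplus / as sysdba
        SQL> startup
        SQL> exit

        :: if it opens cleanly, the old objects are available again; optionally
        :: export the schema and import it into the new database
        exp nagaraju/password FILE=E:\nagaraju.dmp OWNER=nagaraju
        imp nagaraju/password@newdb FILE=E:\nagaraju.dmp FULL=Y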

  • How to Mirror or Clone a Spanned Volume in Windows 2008

    - by Matt
    I have a spanned volume (3x6+ TB disks spanned to one 20+ TB volume) that I need to mirror or clone to a new 20+ TB (unspanned) volume. Once mirrored or cloned I'm going to destroy the original volume and reuse the storage elsewhere. Windows 2008 will not allow me to mirror it because the original is a spanned volume. I cannot simply copy the data, because there are sparse files on the volume. So the OS thinks there is 150+ TB used on the disk when there really is only around 18TB used physically. When I try to use the copy command it won't run because it thinks the destination volume needs to be 150+ TB to hold it all. A conundrum, but I figure someone here has the answer. Thanks, Matt
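
    One possible angle, sketched purely as an idea to test rather than a known-good procedure: rsync's --sparse option recreates runs of zeroes as holes on the destination, so the copy is sized by physical rather than logical usage. Whether the NTFS sparse attribute survives a Cygwin rsync is something to verify on one large file first; the drive letters are assumptions.

        # D: = old spanned volume, E: = new 20 TB volume (assumed letters), run from Cygwin
        rsync -av --sparse --stats /cygdrive/d/ /cygdrive/e/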

  • SSH not working after Restoring Running-Config to a Replacement Cisco Router

    - by Kyle Brandt
    One of my Cisco routers died over the weekend; Cisco sent the replacement and I restored the config using copy tftp: running-config. Everything seems to work fine, but I can no longer ssh into the router (I can telnet). The connection is refused, so it seems it isn't listening on port 22. I had previously backed up the config by just doing ssh router 'show run' > backup_config from my workstation. So: is there anything wrong with my method of backup vs. copy running-config tftp:? I know I haven't given any debug information, but is there something typical I need to do to get ssh working?
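
    For what it's worth, the RSA key pair is not part of the running configuration (it is stored in NVRAM outside the config), so a config restored onto replacement hardware comes up without the keys SSH needs and the SSH server stays disabled. A minimal sketch of regenerating them, where the domain name is an assumption (use whatever the original router had):

        conf t
         ip domain-name example.com
         crypto key generate rsa modulus 2048
         ip ssh version 2
        end
        ! show ip ssh should now report that SSH is enabled
        show ip ssh
        write memory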

  • How do I back up my Windows partition from an Ubuntu live CD?

    - by lalli
    My Windows partition (C:) is corrupt. I'm booting up from an Ubuntu live CD and trying to copy all the files from C: to my external drive, but the system expands all of the links, producing a projected copy size of 1.8TB (my external drive is just 1TB, and the data in C: is around 700MB). Then I looked at dd and other backup utilities, but for everything I looked into, I couldn't figure out whether or not the image would be readable in Windows through any other app. Has anyone else tried to back up data from a corrupted Windows installation using Ubuntu?
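
    A sketch of two live-session options, assuming the Windows partition is /dev/sda1 and the external drive is /dev/sdb1 (both are assumptions; check with sudo fdisk -l first):

        sudo mkdir -p /mnt/win /mnt/ext
        sudo mount -o ro /dev/sda1 /mnt/win
        sudo mount /dev/sdb1 /mnt/ext

        # option 1: plain file copy; rsync -a copies symlinks/junction points as links
        # instead of following them, so the projected size stays at the real usage and
        # the files remain directly readable from Windows later
        sudo rsync -a --stats /mnt/win/ /mnt/ext/win-backup/

        # option 2: a partition image (needs the partition unmounted); the image is only
        # readable again through ntfsclone --restore-image, not directly by Windows
        sudo umount /mnt/win
        sudo ntfsclone --save-image -o /mnt/ext/win-c.img /dev/sda1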

  • How can I install a custom (patched) PECL extension?

    - by JKS
    I'm trying to use the htscanner PECL extension on my CentOS 5/PHP 5.2.6 machine, but there's a bug in the latest version where a newline character is added to the end of every php_value directive. This behavior causes my include_path and error_log values not to work. The bug and the patch are documented on the PECL site: http://pecl.php.net/bugs/bug.php?id=16891 I've downloaded the latest version, applied the patch, and re-compressed the package — but I can't get the PECL installer to accept it — or any local package, for that matter. I've tried every variation of the pecl install syntax that I can think of, and the only times I'm able to get it to work, it downloads an online copy first and ignores the local copy. Can anyone recommend a method for installing a PECL extension from a local file? Thanks for your consideration.
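
    A sketch of the manual build route, which side-steps the pecl downloader entirely (pecl install should also accept a local path such as ./htscanner-1.0.1.tgz, if the package.xml inside still matches the patched contents); the version number in the file name is an assumption:

        tar xzf htscanner-1.0.1.tgz
        cd htscanner-1.0.1
        # apply the patch from pecl.php.net/bugs/bug.php?id=16891 here (patch -p0 < fix.diff)
        phpize
        ./configure
        make && sudo make install
        # enable the extension for this PHP build
        echo "extension=htscanner.so" | sudo tee /etc/php.d/htscanner.ini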

  • Get a file from a load balanced server in Windows Server

    - by Leandro
    I have a load-balanced server in the production environment for my application. The server is on Windows Server 2008 R2. I'm running a web application that creates and saves a file into a folder on the web path, so I need to create a job that copies this file to another server. The main idea is that a file watcher checks for the file and then copies it instantly. But how can I know on which server the file is? Please avoid "why don't you" answers and give a direct answer, if anyone has one.
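
    A minimal sketch of the copy job, assuming the web folder is shared from both nodes; WEB01/WEB02, the share and report.xml are placeholders. The job simply checks both nodes and copies from whichever one has the file:

        rem run as a scheduled task (e.g. every minute)
        if exist \\WEB01\webfiles\upload\report.xml copy /Y \\WEB01\webfiles\upload\report.xml \\STORAGE\drop\
        if exist \\WEB02\webfiles\upload\report.xml copy /Y \\WEB02\webfiles\upload\report.xml \\STORAGE\drop\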

  • keymapping when ssh-ing from mac to linux

    - by Yair
    I'm using Lion to ssh -X to a Linux machine and work on some code that's located on it. I open up an editor on the remote machine (usually MATLAB) and program in it. My problem is that on Linux there is no concept of the command key. So if I want to copy some text from a local window to the editor that runs on the remote machine, I need to command-C to copy and then control-V to paste. This obviously drives me nuts. I was wondering if there is a way to change the keymapping such that the command key will be recognized as a control key by the remote processes. Or is this something I need to change in my local (Mac) X configuration?
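
    One thing that may be worth trying, sketched with no guarantee: remap the modifiers on the local X11.app display so the keycodes the Command keys send are treated as Control by remote X clients. The keycodes 63 and 67 below are assumptions; press the keys in xev and substitute whatever it reports.

        # run in an X11 terminal on the Mac (or keep the expressions in ~/.Xmodmap)
        xmodmap -e "keycode 63 = Control_L" \
                -e "keycode 67 = Control_L" \
                -e "add control = Control_L"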

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did it before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. On my test virtual server, a new site using the Enterprise wiki template was set up. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option? UPDATE: Another phenomenon observed is that in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show its own table listing of pages under that document library, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

  • Log Files from bash script output

    - by neildeadman
    I have a script that runs (this works fine). I'd like to produce logfiles from its output and still show it on screen. From a blog post I have this command, which creates three files:

        ((./fk.sh 2>&1 1>&3 | tee errors.log) 3>&1 1>&2 | tee output.log) 2>&1 | tee final.log

    This does exactly what I want it to. My only issue is that I create files in my script and copy them somewhere, and I'd like to copy these logfiles there too, which I can't do whilst this script is running. I also wanted to make it easier for any user to run my script, so I created another script to run this one. According to this post (see the last reply), if I put a . before the script name, variables assigned in the called script should be usable in the calling script. It doesn't seem to work, though, and I can't figure out why or find alternative methods. Can anyone help?
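
    For what it's worth, the likely reason the . trick loses the variables is that the sourced script sits on the left side of a pipeline, and every part of a pipeline runs in its own subshell. A sketch of a wrapper that avoids the pipeline by using process substitution instead, so the sourcing happens in the current shell and the logs are ordinary files you can copy afterwards (fk.sh and the log names come from the question; $COPY_DEST is an assumed variable set inside fk.sh):

        #!/bin/bash
        # wrapper sketch: source fk.sh, tee its streams, then copy the logs

        : > final.log    # start with an empty combined log
        . ./fk.sh \
            > >(tee output.log | tee -a final.log) \
            2> >(tee errors.log | tee -a final.log >&2)

        sleep 1    # the tee process substitutions finish asynchronously; give them a moment

        # variables set inside fk.sh (e.g. a hypothetical $COPY_DEST) are visible here
        cp output.log errors.log final.log "${COPY_DEST:-/tmp}/"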

  • How can I get keyboard shortcuts for certain characters listed in character map that don't have an ALT equivalent listed?

    - by Kat
    Does anyone know how to get a complete listing of Character Map equivalents? For example, look in the Windows Character Map under Arial for ¼. It says you can type ALT+0188. But some things do not have an Alt equivalent listed. For example, the character at U+1254 only gives its Unicode code point and no "Alt number". Obviously you can just copy and paste, but is there a way to find an Alt equivalent for that and other characters so one doesn't need to copy and paste each time? Or any other workaround suggestions? Thanks!
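
    One possible workaround, assuming a registry edit and a log-off/log-on are acceptable: Windows has a hex Alt-code input mode that covers any code point in the Basic Multilingual Plane (in applications that support it), so characters without a listed Alt number can still be typed without copy and paste.

        reg add "HKCU\Control Panel\Input Method" /v EnableHexNumpad /t REG_SZ /d 1
        :: after logging back on: hold Alt, press the numeric-keypad + key,
        :: type the hex code (e.g. 1254 or 00BC), then release Alt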

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for NGINX, but I'm learning I have to recompile the entire thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx 1.3.0 stock, and I have never had to recompile from source before, so I'm a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact it is affecting a live server, so I have no idea what the downtime would be). My plan was to copy all the existing modules used (nginx -V to list them all and copy the modules already compiled), then rebuild from source with the copied info above, including the ./configure --with-http_geoip_module reference. Is it possible to back up the existing nginx configuration in case something goes wrong?
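
    A rough sketch of a backup-and-rebuild pass. The binary path, source directory and download URL are assumptions (Media Temple/Plesk builds may live elsewhere; `which nginx` and `nginx -V` will tell you), and the idea is simply to keep the old binary and config so rollback is a file copy:

        # record the existing build flags and back up binary + config
        nginx -V 2> /root/nginx-build-flags.txt
        cp "$(which nginx)" /root/nginx.bak
        tar czf /root/nginx-etc-$(date +%F).tar.gz /etc/nginx

        # rebuild the same version with the same flags plus the GeoIP module
        cd /usr/local/src
        wget http://nginx.org/download/nginx-1.3.0.tar.gz && tar xzf nginx-1.3.0.tar.gz
        cd nginx-1.3.0
        ./configure [paste the flags from nginx -V here] --with-http_geoip_module
        make && make install

        # rollback = copy /root/nginx.bak back over the binary and restart
        nginx -t && service nginx restart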

  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else.

    Most of the footage comes from a Sony handycam (one of those mini DVD ones) so isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a Fields to frames effect (set to bottom first) to each video track.

    I render to audio and video separately. Audio: AC3, 128kbps. Video: YUV4MPEG stream, video pipe settings: ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %. Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label, and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them: mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v. I combine the audio and video files using ffmpeg: ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg. Then I build the menus with spumux and I create the DVD file system with dvdauthor, and finally I write it to a dvd-r like this: nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally, when I did it the DVD flickered badly, so as suggested in a guide I added the fields to frames effect in Cinelerra. Now it doesn't "flicker", but has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves.

    This is what I've tried so far: removed "mpeg2video" from the Cinelerra video render pipe; removed +ilme from the render pipe; removed +ildct from the render pipe; removed +ilme from the render audio/video rejoin command; removed +ildct from the render audio/video rejoin command; added -alt to the render pipe; added -alt to the render audio/video rejoin command; tried with and without the frames to fields effect in Cinelerra; and various combinations of the above.

    I've also tried this: change the Cinelerra fps to 50, use fields to frames (instead of frames to fields), render to an intermediate QTforlinux jpeg video stream, re-import that back into Cinelerra, add a frames to fields effect and then render that output as normal (@25fps) -- and I still have the same problem.

    Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)
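
    One small diagnostic that might help narrow this down, assuming a reasonably recent ffprobe is available: judder on motion with clean playback on PCs is typically a field-order mismatch (PC players deinterlace and hide it), so it is worth confirming that every intermediate file is actually flagged bottom-field-first rather than top-field-first or progressive.

        # prints the flagged field order (tt/bb/progressive/...) of each intermediate
        for f in merged.m2v mergedReIndexed.m2v FinalMovie.mpg; do
            echo "== $f"
            ffprobe -v error -select_streams v:0 -show_entries stream=field_order -of default=nw=1 "$f"
        done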

  • Linux Live CD for old computer

    - by Joel Coehoorn
    I have a Pentium II (that's right, Pentium II) with a scant 200MB of RAM. This was a high-end workstation in its day. The machine currently runs DOS on a RAID array, and I need to pull some data from it. I figure my best chance at this is to use a Linux live CD to copy the data to one of our Active Directory network shares (there is a network card in the machine). Unfortunately, my Linux skills are abysmal, so I'm not sure where to get started: Where should I look to find a Linux CD that will run well on such an old system? Since I'm likely going to need to be command-line only, what do I need to do to configure the network card and mount the network share via the command line? Bonus points: the exact syntax needed to copy and convert the entire volume for use in VMware Server 2.0, but really, just copying all the data should be enough.
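
    A sketch of the command-line steps, assuming a lightweight live CD that fits in 200MB of RAM, that the DOS volume shows up as /dev/sda1, and that the share, username and domain below are placeholders:

        sudo mkdir -p /mnt/dos /mnt/share
        sudo mount -o ro /dev/sda1 /mnt/dos
        sudo mount -t cifs //fileserver/backup /mnt/share -o username=aduser,domain=EXAMPLE

        # plain file copy of everything
        cp -a /mnt/dos/. /mnt/share/old-workstation/

        # raw image of the whole disk, convertible later on another machine with
        #   qemu-img convert -O vmdk disk.img old-workstation.vmdk
        sudo dd if=/dev/sda of=/mnt/share/old-workstation/disk.img bs=64k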

  • Limit of dvd rom

    - by user23950
    I have a Lite-On DVD drive, and I'm going to copy lots of files from maybe 40-70 DVDs. I've been using this drive for about 4-7 months now, and I have also burned lots of DVDs. I've copied the 2nd DVD, but the drive is making a sound -- a sound that I do not hear frequently. Does it depend on the DVDs that I'm reading, or is my DVD drive getting old? How many DVDs do you think the drive can copy without threatening its health?

  • What are these CPU cache settings? Snoop Filter, ACL prefetch, HW prefetch

    - by eater
    I was in my BIOS setup turning on VT-x support today and saw these other settings. A little googling indicates that they each seem to turn on some sort of optimization to do with the CPU's L2 cache. They were all turned off by default. The processor in question is an Intel Xeon quad-core 3.4GHz (X5492). My OS is Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. I have 4GB of RAM if that matters. Here's what the BIOS manufacturer has to say: Snoop Filter Enabling the snoop filter typically improves performance by reducing snoop traffic on the frontside bus in dual processor configurations. Well I like the sound of improved performance. Why would the BIOS have this off by default? Or by dual processor do they not mean multi-core? Regardless, is there a downside if this is on? ACL Prefetch When enabled, the Adjacent Cache Line Prefetcher fetches both cache lines that comprise a cache line pair when it determines required data is not currently in its cache. When disabled, the processor will only fetch the cache line required by the processor. HW Prefetch Fetches an extra line of data into L2 from external memory. Both of these sound like optimizations that have some drawbacks. What are the reasons to turn them on? What are the reasons to leave them off. Why is the default off?

  • mirroring linux server to external usb harddrive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:

    - numerous Linux servers on commodity hardware
    - trying to do a recovery mirror copy to external hard drives
    - external hard drives are smaller than the source hard drives, but larger than the data
    - external drives are connected via USB 2 (slow)
    - servers range from 20GB of data to 400GB of data
    - servers are remote, so hands-on access is a pain
    - need to copy boot files
    - external drives are currently empty

    Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting a CD etc. The rsync/cpio solutions I've looked at don't work great with grub/dev/proc etc. I understand that since the system isn't offline it won't be a "mirror" image, as files change, but that's OK. Are there any free/commercial products that would work?
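
    A minimal sketch of the rsync-from-inside-the-running-server route, assuming the external drive is ext3-formatted and appears as /dev/sdb1, and that making it bootable afterwards with a rescue CD is acceptable; it is not a byte-for-byte ghost image, but it captures the files including /boot:

        mkdir -p /mnt/usb && mount /dev/sdb1 /mnt/usb

        rsync -aH --numeric-ids \
            --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \
            --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* --exclude=/media/* \
            / /mnt/usb/

        # recreate the mount points the excludes skipped
        mkdir -p /mnt/usb/{proc,sys,dev,tmp,run,mnt,media}

        # to make the copy bootable on restore, boot a rescue CD on the target box and run
        #   grub-install --root-directory=/mnt/usb /dev/sdX   (grub legacy; adjust for grub2)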

  • rsync - How to exclude one .htaccess but not all of them

    - by Cory Gagliardi
    I have an rsync command for copying my files from dev to production. I don't want to copy the .htaccess file that's in the root of the HTML directory, but I do want to copy the few .htaccess files that are in its sub directories. I'm using the argument --exclude .htaccess, which is stopping all of the .htaccess files from getting copied. The other arguments I'm including are -a --recursive --times --perms. Is it possible to configure rsync to do this? Edit: Here is my full command:

        rsync -a --recursive --times --perms \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude .htaccess --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/* /home/user/public_html/
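
    For what it's worth, an rsync exclude pattern that starts with a slash is anchored to the root of the transfer, so a sketch like the one below (which assumes using dev_html/ itself as the source instead of dev_html/*, so that the HTML directory is the transfer root) should skip only the top-level .htaccess while still copying the ones in sub directories:

        rsync -a \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude '/.htaccess' --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/ /home/user/public_html/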

  • What is the easiest way to get perfmon counter names into a text file?

    - by Bill Paetzke
    I'd like to create a settings file for my logman command. I expect to have lots of perfmon counters. Is there any easy way to get all the perfmon counters' exact text anywhere? The only thing I thought of was to create a Perfmon Counter Log through the GUI and then export the list of selected counters--but I don't see an export option! I guess I could manually copy what I see on the screen, but that seems inefficient. I'm going to be dealing with tens of counters. Maybe there is a list somewhere? That'd be easier to copy and paste from.
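
    One possible shortcut, assuming the built-in typeperf tool is present (it ships with Windows XP/2003 and later): it can dump every installed counter path to a text file, which logman can then consume with -cf.

        :: every installed counter path
        typeperf -q > counters.txt
        :: the same list expanded with instances
        typeperf -qx > counters-instances.txt
        :: or limited to one object
        typeperf -q "PhysicalDisk" > disk.txt

        :: feed the (trimmed) file to logman
        logman create counter MyPerfLog -cf counters.txt -o C:\PerfLogs\MyPerfLog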

  • publish over ssh jenkins

    - by Pravish
    I have been working on a small project where I need to copy files from one Windows machine to another through Jenkins in a secure way. I have heard about the Publish Over SSH plugin in Jenkins. I tried to set it up, but no luck. Has anybody done that who can help me with it? To try to resolve it, I installed OpenSSH with Cygwin on both Windows servers and tried to copy the files (through SCP) or just connect both hosts (through ssh) the Linux way through Cygwin, but I always get one of these errors: ssh 3612 tty_list::allocate_tty: No tty allocated or scp 2680 tty_list::allocate_tty: No tty allocated. Please help!

  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275MB file from said device through netatalk, I get an estimated download time of around 45 minutes. With Samba and FTP (using vsftpd) I get 1+ hours! Using scp to copy the file results in a complete download within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal HDD as an intermediate location/onion layer for the external HDD (for reads only)? Details -- AppleVolumes.default: /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv

  • How to change windows bootloader target folder

    - by ST3
    Here is described part of the Windows boot process. I would like to ask if there is a way to change the boot folder; I mean, to use something else instead of C:\WINDOWS, where that something else is a copy of the Windows directory. It looks like bcdedit is good for that purpose, but I'm not sure how to use it. What I want is to change the path, which currently is \Windows\system32\winload.exe, to \Windows Copy\system32\winload.exe. Another thing I have found is the registry: the HKLM\BCD00000000\Objects\{df90fe29-c40d-11e2-a7bb-92410b6e649d}\Elements\12000002::Element value is \Windows\system32\winload.exe, so changing this may also be promising. But I'm not sure if I should change the registry value, and I don't know how to use bcdedit, so any related help will be appreciated.
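
    A rough sketch of the bcdedit route, assuming the copy lives in C:\Windows Copy on the same volume. This only changes what the boot loader points at (a copied Windows directory can still fail to boot for other reasons), and {guid} stands for whatever identifier bcdedit prints in the first step:

        :: clone the current boot entry
        bcdedit /copy {current} /d "Windows Copy"

        :: point the new entry at the copied folder (substitute the printed {guid})
        bcdedit /set {guid} device partition=C:
        bcdedit /set {guid} osdevice partition=C:
        bcdedit /set {guid} path "\Windows Copy\system32\winload.exe"
        bcdedit /set {guid} systemroot "\Windows Copy"

        :: add it to the boot menu
        bcdedit /displayorder {guid} /addlast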

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first, I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger’s Cat). I've just installed OpenLDAP with yum and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema
        pidfile /var/lib/ldap/slapd.pid
        loglevel trace
        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824
        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD
        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL which aims to allow a specific group read-only access to the userPassword attribute, the owner read and write access, anonymous access only for authentication, and to refuse access to everyone else. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus I've got a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
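
    Two things that may be worth checking, sketched below on the assumption that the base entry dc=domain,dc=tld really exists in the directory: -H expects an LDAP URI rather than a bare hostname, and once any access directive is present, entries that no rule matches fall through to the implicit final "access to * by * none", which slapd reports to an unprivileged client as "No such object" on the search base. A catch-all rule after the userpassword one, plus a URI-form simple-bind search:

        # slapd.conf: added after the userpassword ACL
        access to *
            by users read
            by * none

        # ldapsearch with a URI and an explicit simple bind (-x); same placeholder names as above
        ldapsearch -H ldap://ldap.domain.tld -x -W \
            -D "cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld" \
            -b "dc=domain,dc=tld" -s sub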

  • Do I really need Microsoft Updates?

    - by Tony Wong
    When I install a fresh copy of Windows XP Home (I bought it from the store, not a copy), my PC runs at lightning speed. But when I start installing all the updates and patches, minus the .NET 4.0 client (as the .NET 4.0 client seems to bring the machine to a slow crawl), the PC starts to slow down, like there are more resources to watch or something is happening in the background. So could I not get away with an awesome virus protector and an awesome firewall setup and avoid all the patches? The machine I have is a quad-core with 4 GB RAM and a 2.3 GHz processor. Tons of room, and the machine can run several applications at one time, but when the updates happen, it's s-l-o-w!
