Search Results

Search found 23555 results on 943 pages for 'command timeout'.

Page 593/943

  • How can I repair the Windows 8 EFI Bootloader?

    - by Superuser
    I installed Windows 7 and Windows 8 in EFI mode on a hard disk some days ago. Today, the bootloader went missing or got corrupted. I currently have the Windows 8 installer on a flash drive and tried its Automatic Repair option to repair the bootloader, but it didn't do anything. The Startup Repair option is also missing from the Windows 8 installer. How can I repair/recreate the EFI bootloader from the Command Prompt? BCDEDIT returns the following message: The requested system device cannot be found.
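    For context, one commonly suggested repair sequence from the installer's Command Prompt is sketched below; it assumes the FAT32 EFI System Partition is still intact and can be given a drive letter (the volume number and drive letters are placeholders for whatever your setup shows):

        rem inside diskpart: find the small FAT32 EFI System Partition and give it a letter
        diskpart
        list volume
        select volume 3
        assign letter=S
        exit
        rem rebuild the EFI boot files (C: may be a different letter inside the recovery environment)
        bcdboot C:\Windows /s S: /f UEFI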

    Read the article

  • bash alias doesn't carry over with sudo

    - by agent154
    I'm curious if there's a way to get my .bash_profile to work when I sudo a program. For example, I have it set to alias emerge='emerge -av' so that I can install software, and it will always ask me if I want to proceed before downloading and installing. However, I just noticed that when I run sudo emerge foo, it defaults to just the plain command emerge foo instead of emerge -av foo. The only thing that comes to mind to fix this is to also put the alias in root's .bash_profile, but I don't want to have to resort to that since I will always have to make changes in two places when I want to add stuff to my own profile. Is there another way around this that I'm unfamiliar with?
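    For illustration, here is a minimal sketch of the two options involved, plus a third trick that relies on documented bash behavior: if an alias value ends in a space, bash also checks the next word for alias expansion, so aliasing sudo itself makes sudo emerge pick up your emerge alias (the alias shown is simply the one from the question):

        # in your own ~/.bash_profile
        alias emerge='emerge -av'
        # option 1: duplicate the alias in root's .bash_profile (two places to maintain)
        # option 2: make bash expand the word after sudo as an alias too
        alias sudo='sudo '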

    Read the article

  • kernel panic with exitcode=0x00000004 and no call trace

    - by litmusconfig
    A bit of background first - I'm trying to configure a MicroBlaze Linux (big-endian version) system on a Xilinx ML506 eval board. The goal is to use the second partition of a CompactFlash card attached to the Xilinx SystemACE controller. So far, root in initramfs works and after boot, I can mount and use said partition, no problem. But if I try to use it right from the get-go with the "root=/dev/xsa2" kernel command line parameter, the system hangs with [...] Freeing unused kernel memory: 143k freed Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004 And that's it - no regdump, no call trace, nothing further from the serial console, even though the kernel has been configured with debugging enabled. Now, I'm pretty new at this, so is there something else I should be doing to see something more informative from the kernel?

    Read the article

  • Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    - by HosamKamel
    Microsoft has released the beta of Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”, the Eclipse plugin and cross-platform command line assets that were acquired from Teamprise back in November. You can download the bits here, and participate in the associated Microsoft Connect community here. Changes in this release: all of the architectural changes in TFS 2010 have been reacted to, which primarily shows up in the support for Team Project Collections, but it also means that the Eclipse plug-in supports all the possible configurations for project portal and reporting services (including not having any configured at all). The enhanced work item linking and hierarchy capabilities have been added: you can now define typed links, query for work items based on links, and work with work item hierarchies. Support for the new WF-based team build has been added. The team has also reacted to a lot of underlying changes in the source control version model with respect to how branching, merging, and renames happen: history now follows branches and merges, and branches are proper first-class citizens in the Source Control Explorer. You can check a detailed post written by bharry here.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly along with Updating Statistics. With no other alternate options occurring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now."
    Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: blocking, a bad plan in cache, outdated statistics, or a hardware bottleneck.
    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to roll back, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.
    The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.
    To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.  It is better to recompile stored procedures than to drop the procedure cache, as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again.
    To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically run under 30% CPU utilization, but your baseline could be higher and still be within a normal range.
    If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above-mentioned things.  Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process has completed on the database, it is unavailable to your applications.
    If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above-mentioned things.
    If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
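    For reference, the checks described above boil down to a handful of commands; here is a rough sketch using sqlcmd (server, database, table and procedure names are placeholders, and anything that clears the cache should be tested before being run in production):

        rem check for blocking
        sqlcmd -S MYSERVER -E -Q "select spid, blocked, waittime, lastwaittype from master..sysprocesses where blocked <> 0"
        rem mark a suspect procedure for recompilation (narrow impact, usually preferable)
        sqlcmd -S MYSERVER -E -d MyDatabase -Q "exec sp_recompile 'dbo.MySlowProc'"
        rem last resort: drop the entire procedure cache (temporary performance penalty)
        sqlcmd -S MYSERVER -E -Q "DBCC FREEPROCCACHE"
        rem refresh statistics after a large bulk import
        sqlcmd -S MYSERVER -E -d MyDatabase -Q "UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN"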

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx
    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e., it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk and thus understand the reasons for this behavior. NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future…
    Test conditions
    The singleton orchestration: for the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.
    Throttling settings: I have two distinct hosts, one that hosts the receive port (a basic FILE port), Ports_ReceiveHost, and one that hosts the orchestration, ProcessingHost. In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000).
    Evolution of performance counters when submitting messages
    Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object): Database size, High database size, Message delivery throttling state, Message publishing throttling state, Message delivery delay (ms), Message publishing delay (ms), Message delivery throttling state duration, Message publishing throttling state duration. (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)
    Database size
    It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message at a time every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened: from 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fires. At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed. Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition has been reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) has begun growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and when it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.
    Digging further
    So, what happens to the database queue then? Is it flushed some day or does it keep growing and growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e., exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.
    Digging even further
    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)
    A last word
    Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding!! So whatever you do with those settings, do a lot of testing and benchmarking!
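    As a side note, if you want to capture the same counters from a script rather than the Perfmon GUI, the built-in typeperf tool can log them; a quick sketch (the host instance names in parentheses come from this test setup and will differ in your environment):

        typeperf "\BizTalk:Message Agent(Ports_ReceiveHost)\Database size" ^
                 "\BizTalk:Message Agent(Ports_ReceiveHost)\High database size" ^
                 "\BizTalk:Message Agent(ProcessingHost)\Message delivery throttling state" ^
                 "\BizTalk:Message Agent(ProcessingHost)\Message publishing delay (ms)" ^
                 -si 5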

    Read the article

  • How is htop "Swp" calculated?

    - by Thomas
    When I run htop (on OS X 10.6.8), I see something like this : 1 [||||||| 20.0%] Tasks: 70 total, 0 running 2 [||| 7.2%] Load average: 1.11 0.79 0.64 3 [|||||||||||||||||||||||||||81.3%] Uptime: 00:30:42 4 [|| 5.8%] Mem[|||||||||||||||||||||3872/4096MB] Swp[ 0/0MB] PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 284 501 57 0 15.3G 1064M 0 S 0.0 6.5 0:01.26 /Applications/Firefox.app/Contents/MacOS/firefox -psn_0_90134 437 501 57 0 14.8G 785M 0 S 0.0 4.8 0:00.18 /Applications/Thunderbird.app/Contents/MacOS/thunderbird -psn_0_114716 428 501 63 0 12.8G 351M 0 S 1.0 2.1 0:00.51 /Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/ 696 501 63 0 11.7G 175M 0 S 0.0 1.1 0:00.02 /System/Library/Frameworks/QuickLook.framework/Resources/quicklookd.app/Conte 38 0 33 0 11.1G 422M 0 S 0.0 2.6 0:00.59 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framewo 183 501 48 0 10.9G 137M 0 S 0.0 0.8 0:00.03 /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder How can I have Processes using Gigabytes of VIRT memory and still 0MB of Swap used ?

    Read the article

  • rsync from OS X to Ubuntu failing for large (>15GB) files

    - by johnny_bgoode
    I'm trying to rsync a 15 GB file from my OSX box to a box running Ubuntu 10.04 server. rsync is transferring ~300-700Mb and then closing the connection with the following error: Read from remote host my.host.name: Connection reset by peer rsync: writefd_unbuffered failed to write 4 bytes [sender]: Broken pipe (32) rsync: connection unexpectedly closed (397214 bytes received so far) [sender] rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-40/rsync/io.c(452) [sender=2.6.9] The exact command I am executing is: rsync --progress --archive --inplace my.15GB.file.tgz my.host.name:~/ I am sure that there is enough free space on the Ubuntu box. Any ideas what could be causing the connection to drop?

    Read the article

  • How to fix “Module ndiswrapper not found”

    - by jason328
    I have Ubuntu 12.10 and whenever I run sudo modprobe ndiswrapper, I get the following error. FATAL: Module ndiswrapper not found. The command dkms status returns with... ndiswrapper, 1.57, 3.2.0-32-generic, i686: installed The make.log in ndiswrapper returns with... DKMS make.log for ndiswrapper-1.57 for kernel 3.5.0-18-generic (i686) Wed Nov 7 22:16:12 EST 2012 make -C /usr/src/linux-headers-3.5.0-18-generic M=/var/lib/dkms/ndiswrapper /1.57/build make[1]: Entering directory `/usr/src/linux-headers-3.5.0-18-generic' LD /var/lib/dkms/ndiswrapper/1.57/build/built-in.o MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/crt_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/hal_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/ndis_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/ntoskernel_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/ntoskernel_io_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/rtl_exports.h MKEXPORT /var/lib/dkms/ndiswrapper/1.57/build/usb_exports.h CC [M] /var/lib/dkms/ndiswrapper/1.57/build/crt.o CC [M] /var/lib/dkms/ndiswrapper/1.57/build/hal.o CC [M] /var/lib/dkms/ndiswrapper/1.57/build/iw_ndis.o CC [M] /var/lib/dkms/ndiswrapper/1.57/build/loader.o CC [M] /var/lib/dkms/ndiswrapper/1.57/build/ndis.o /var/lib/dkms/ndiswrapper/1.57/build/ndis.c: In function ‘NdisGetCurrentProcessorCounts’: /var/lib/dkms/ndiswrapper/1.57/build/ndis.c:2657:24: error: ‘struct kernel_stat’ has no member named ‘cpustat’ /var/lib/dkms/ndiswrapper/1.57/build/ndis.c:2658:31: error: ‘struct kernel_stat’ has no member named ‘cpustat’ /var/lib/dkms/ndiswrapper/1.57/build/ndis.c:2659:17: error: ‘struct kernel_stat’ has no member named ‘cpustat’ make[2]: *** [/var/lib/dkms/ndiswrapper/1.57/build/ndis.o] Error 1 make[1]: *** [_module_/var/lib/dkms/ndiswrapper/1.57/build] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic' make: *** [modules] Error 2 I have installed commons, utils-1.9, dkms, source but it's still returning this error. How do I fix this?

    Read the article

  • Upgrade OpenSSL 0.9.8k to OpenSSL 1.0.1c on Ubuntu 10.04

    - by Nina
    We're currently using Ubuntu 10.04 and based on the PCI Compliance results, we're told to upgrade our OpenSSL. I attempted to do this using this reference: http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu and http://www.lunarforums.com/dedicated_web_hosting_at_lunarpages/upgrading_openssl-t35015.0.html Unfortunately, they didn't work for me. And when I attempted to remove the old version prior to installation, it looks like it broke a few things in the system. The article from Steve Gordon seemed like it would work for me, but when I ran the openssl version command, it still read that it was the old version. I was wondering if anyone has any suggestions on what I should do. Fix: After following the steps from Steven Gordon, make sure you restart apache and / or restart your computer (I did both, but I'm sure a simple restart will fix it right up).
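    As a quick sketch of the verification step in that fix (assuming the stock apache2 service on Ubuntu 10.04):

        # confirm the command line tool now reports the new version
        openssl version
        # restart anything linked against the old library
        sudo service apache2 restart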

    Read the article

  • conversion of a VMDK image with qemu-img failed with "error while reading sector 131072: Invalid argument"

    - by Erik Sjölund
    I tried to convert a VMDK image found in an OVA file to the QCOW2 format with the qemu-img command, but it failed with the error message qemu-img: error while reading sector 131072: Invalid argument user@ubuntu:/tmp$ wget ftp://ftp.sanger.ac.uk/pub/databases/Pfam/vm/PfamWeb_20120124.ova user@ubuntu:/tmp$ tar xfv PfamWeb_20120124.ova PfamWeb_20120124_2.ovf PfamWeb_20120124_2.mf PfamWeb_20120124_2-disk1.vmdk user@ubuntu:/tmp$ qemu-img convert -O qcow2 PfamWeb_20120124_2-disk1.vmdk PfamWeb_20120124_2.qcow2 qemu-img: error while reading sector 131072: Invalid argument user@ubuntu:/tmp$ qemu-img --version | grep "qemu-img version" qemu-img version 1.0, Copyright (c) 2004-2008 Fabrice Bellard user@ubuntu:/tmp$ dpkg-query -f='${Version}\n' --show qemu-utils 1.0+noroms-0ubuntu14.1 user@ubuntu:/tmp$ cat /etc/issue Ubuntu 12.04.1 LTS \n \l How do I avoid the error?

    Read the article

  • OpenGL 2 on Android: native window

    - by ThreaderSlash
    According to the OGLES specification, we have the following definition: EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config, NativeWindowType native_window, EGLint const * attrib_list) More details, here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html And also by definition: int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window, int32_t width, int32_t height, int32_t format); More details, here: http://mobilepearls.com/labs/native-android-api I am running an Android native app on OGLES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene graph environment, the following variables are defined: struct android_app { ... ANativeWindow* window; }; android_app* mApplication; ... mApplication=&pApplication; And to initialize the App, we run the commands in the code: ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat); mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); The funny thing is that the command ANativeWindow_setBuffersGeometry behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the parameter mApplication->window, as it should according to its definition. Instead, it looks for the following input: EGLNativeWindowType hWnd; mSurface = eglCreateWindowSurface(mDisplay,lConfig,hWnd,NULL); As an alternative, I considered to use instead: NativeWindowType hWnd=android_createDisplaySurface(); But the debugger says: Function 'android_createDisplaySurface' could not be resolved Can someone tell if there is a way to convert mApplication->window? In a way that the data from the android_app gets accepted by the window surface?

    Read the article

  • We Don’t Need No Regions

    - by João Angelo
    If your code reaches a level where you want to hide it behind regions then you have a problem that regions won't solve. Regions are good for hiding things that you don't want to have knowledge about, such as auto-generated code. Normally, when you're developing, you end up reading more code than you write, so why would you want to complicate the reading process? I, for one, would love to have that one discussion around regions where someone convinces me that they solve a problem that has no other alternative solution, but I'm still waiting. The most frequent argument I hear about regions is that they allow you to structure your code, but why not just structure it using classes, methods and all that other stuff that OOP is about, because at the end of the day, you should be doing object oriented programming and not region oriented programming. Having said that, I do believe that it is sometimes helpful to have a quick overview of a code file's contents, and Visual Studio allows you to do just that through the Collapse to Definitions command (CTRL + M, CTRL + O), which collapses the members of all types; if you like regions, you should try this: it is much more useful to read all the members of a type than all the regions inside a type.

    Read the article

  • Issue with gpg agent in Ubuntu 12.04 after installing gnome3 shell

    - by Jeroen
    I just did a fresh install of Ubuntu 12.04. Initially things were working, but after I installed some software, the gpg agent became unresponsive. I suspect it has something to do with upgrades that I downloaded from the gnome 3 ppa. When I try to sign a package, it terminates with: gpg: problem with the agent - disabling agent use debsign: gpg error occurred! Aborting.... debuild: fatal error at line 1271: running debsign failed The GPG GUI tool (called "Passwords and Keys", or seahorse) isn't starting anymore either. When I click it, it tries to start and then gives up and dies after a couple of seconds. I am not sure where to look for the gpg agent's log files. The only thing that I see in /var/log is in auth.log, which says: May 1 20:04:14 jeroen-ubuntu gnome-keyring-daemon[1997]: couldn't create prompt for gnupg passphrase: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.keyring.SystemPrompter was not provided by any .service files Not sure if it is related, but when I try to start seahorse from the command line, I get: jeroen@jeroen-ubuntu:~$ seahorse (seahorse:4828): GLib-GIO-ERROR **: Settings schema 'org.gnome.crypto.pgp' is not installed Edit: I fixed the seahorse GUI by manually downloading and reinstalling the gnome-keyring version from precise instead of the ppa. However, I still cannot sign packages.

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up: root@bondigas:~# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Wed Oct 20 20:00:41 2010 Raid Level : raid5 Array Size : 5860543488 (5589.05 GiB 6001.20 GB) Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 20 20:13:48 2010 State : clean, degraded, recovering Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64K Rebuild Status : 1% complete UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas) Events : 0.12 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 4 8 64 3 spare rebuilding /dev/sde While that's going I decided to format the beast with the following command: root@bondigas:~# mkfs.ext4 /dev/md1p1 mke2fs 1.41.11 (14-Mar-2010) /dev/md1p1 alignment is offset by 63488 bytes. This may result in very poor performance, (re)-partitioning suggested. Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=16 blocks, Stripe width=48 blocks 97853440 inodes, 391394047 blocks 19569702 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 11945 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 Writing inode tables: ^C 27/11945 root@bondigas:~# ^C I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match so I can format it properly.

    Read the article

  • inetd / xinetd not working under cygwin

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am trying to use xinetd (or inetd) with netcat to act as a TCP proxy. This setup works on Linux without issue. Under Cygwin, either as a service or from a Cygwin command line, (x)inetd fails to open netcat with the error "no such file or directory". I have tried specifying /usr/bin/nc, /usr/bin/nc.exe, /cygdrive/d/cygwin/usr/bin/nc.exe, d:\cygwin\bin\nc.exe, and a TON of other combinations of forward slashes, backslashes, Windows paths and Cygwin paths. No matter what, I get errno 2, no such file or dir. Any ideas? I need this working ASAP. Edit: I thought it may have to do with it being in d:\cygwin (lame hardcoding?) but I tested it on a machine with cygwin on C:\, and the problem exists there too.

    Read the article

  • How can I create an “su” only user (no SSH or SFTP) and limit who can “su” into that account in RHEL5? [closed]

    - by Beaming Mel-Bin
    Possible Duplicate: How can I allow one user to su to another without allowing root access? We have a user account that our DBAs use (oracle). I do not want to set a password on this account and want to only allow users in the dba group to su - oracle. How can I accomplish this? I was thinking of just giving them sudo access to the su - oracle command. However, I wouldn't be surprised if there was a more polished/elegant/secure way.
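    For illustration, a minimal sketch of the sudo-based approach described in the question (group name and paths assumed; edit sudoers with visudo):

        # lock direct password logins for the shared account
        passwd -l oracle
        # in /etc/sudoers: let members of the dba group become oracle and nothing else
        %dba ALL = (root) /bin/su - oracle
        # alternatively, grant "(oracle) ALL" and have them run: sudo -iu oracle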

    Read the article

  • Installing Ubuntu One on Ubuntu 11.10 server

    - by Yar
    I have installed "Ubuntu One" on an Ubuntu server 11.10 based on these instructions: How do I configure Ubuntu one on a 11.10 server? Everything went smooth during installation. However when I try the command: u1sdtool --start to get the server up, I get the following stack error: u1sdtool --start /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/init.py:57: GtkWarning: could not open display warnings.warn(str(e), _gtk.Warning) Unhandled Error Traceback (most recent call last): dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11 Does anyone have a clue how to solve this issue?

    Read the article

  • How to change local user home folder on Windows 2000 and above

    - by Adi Roiban
    I was using a local account on a Windows 7 desktop that is not connected to any Active Directory. After a while, I needed to rename the local account. Renaming the account was simple using the Local Users and Groups management tool. After renaming the user, however, the user's home folder was not renamed, and I could not find any information about how to change the user home folder. I found the ProfileList registry key, but maybe there is a command line way of making such changes: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList Any help is much appreciated. Thanks!
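    For what it's worth, the same registry edit can be sketched from an elevated command prompt (the SID is a placeholder, the profile must not be loaded while you change it, and the folder on disk still has to be renamed separately):

        rem list profiles and the folders they point to
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath
        rem point one profile at the renamed folder
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-XXXXXXXXXX-1001" /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Users\NewName" /f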

    Read the article

  • PHP won't load php.ini

    - by Chuck
    I am racking my brain here and I must be doing something really stupid. I'm trying to set up PHP on Win2008 R2/IIS 7.5. I unpacked php538 into c:\php538 and renamed the php.ini-development to php.ini. Then I tried going to a command prompt and running: c:\php358\php -info I get: Configuration File (php.ini) Path => C:\windows Loaded Configuration File => (none) Scan this dir for additional .ini files => (none) Additional .ini files parsed => (none) I have tried using php5217. I tried putting php.ini in c:\windows. I tried creating the PHPRC environment variable and pointing it to c:\php358. Every time I have the same problem. PHP does not find or load the ini file. If I run: c:\php358\php -c php.ini -info Then it will load the file. But I shouldn't have to do this for PHP to find the file in the same directory, in the Windows directory, or using the environment variable, so I'm stumped. When I try to run PHP from IIS I get a 500 error and I can only assume at this time it is because it can't find and load the php.ini file correctly. I see similar questions on here, but none seem to address the problem I am having.

    Read the article

  • Windows 7 KSOD On Login

    - by Brandon Bertelsen
    For those that are unaware, KSOD means blacK Screen of Death. Essentially, when windows starts my computer shows only the cursor and a black screen. It seems like any and all shell elements are disabled (or perhaps not started). I have seen a number of these questions asked, none of which have matched my situation. CTRL + ALT + ... does not respond Restarting in safe mode, results in the same KSOD sfc /scannow seems to have no effect when typed at the command prompt that is accessed using the recovery tools via the install disk Update to item 3: sfc /scannow reports: There is a system repair pending which requires reboot to complete. Restart Windows and run sfc again. However, Windows does not restart past KSOD. Update to item 3 as per Soandos comment re: /offbootdir sfc /scannow /offbotdir=e:\ /windir=e:\windows "Windows resource protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log..."

    Read the article

  • Jailbreak Your Kindle for Dead Simple Screensaver Customization

    - by Jason Fitzpatrick
    If you’re less than delighted with the default screensaver pack on the Kindle relief is just a simple hack and a reboot away. Read on to learn how to apply a painless jailbreak to your Kindle and create custom screensavers. Unlike jailbreaking other devices like the iPad and Android devices—which usually includes deep mucking about in the guts of your devices and the potential, however remote, for catastrophic bricking—jailbreaking the Kindle is not only extremely safe but Amazon, by releasing the Kindle sourcecode, has practically approved the process with a wink and a nod. Installing the jailbreak and the screensaver hack to replace the default screensavers is so simple we promise you’ll spend 1000% more time messing around making fun screensaver images than you will actually installing the hack. The default screensaver pack for the Amazon Kindle is a collection of 23 images that include portraits of famous authors, woodcarvings from centuries past, blueprints, book reliefs, and other suitably literature-oriented subjects. If you’re not a big fan of the pack—and we don’t blame you if, despite Emily Dickinson being your favorite single lady, you want to mix things up—it’s extremely simple to replace the default screen saver pack with as many custom images as your Kindle can hold. This hack works on every Kindle except the first generation; we’ll be demonstrating it on the brand new Kindle 3 with accompanying notes to direct users with older Kindles.

    Read the article

  • Simple Branching and Merging with SVN

    It's a good idea not to do too much work without checking something into source control.  By too much work I mean typically on the order of a couple of hours at most, and certainly it's a good practice to check in anything you have before you leave the office for the day.  But what if your changes break the build (on the build server; you do have a build server, don't you?) or would cause problems for others on your team if they get the latest code?  The solution with Subversion is branching and merging (incidentally, if you're using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do).
    Getting Started
    I'm going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN.  See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough).
    Overview
    When you know you are going to be working on something that you won't be able to check in quickly, it's a good idea to start a branch.  It's also perfectly fine to create the branch after the fact (have you ever started something thinking it would be an hour and 4 hours later realized you were nowhere near done?).  In any event, the first thing you need to do is create a branch.  A branch is simply a copy of the current trunk (a typical Subversion setup has root directories called trunk, tags, and branches; it's a good idea to keep this and to put your branches in the branches folder).  Once you have a new branch, you need to switch your working copy so that it is bound to your branch.  As you work, you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done you'll want to merge your branch back into the trunk.  When done, you can delete your branch (or not, but it may add clutter).  To sum up:
    Create a new branch
    Switch your local working copy to the new branch
    Develop in the branch (commit changes, etc.)
    Merge changes from trunk into your branch
    Merge changes from branch into trunk
    Delete the branch
    Create a new branch
    From the root of your repository, right-click and select TortoiseSVN > Branch/tag as shown at right (click to enlarge).  This will bring up the Copy (Branch / Tag) interface.  By default the From WC at URL: should be pointing at the trunk of your repository.  I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button).  In the To URL: textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH.  You can name the branch anything you like, but it's often useful to give it your name (if it's just for your use) or some useful information (such as a datestamp, a bug/issue ID that it relates to, or perhaps just the name of the feature you are adding).  When you're done with that, enter a log message for your new branch.  If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag).  You can see an example at right. Assuming everything works, you should very quickly see a window telling you the Copy finished, like the one shown below:
    Switch Local Working Copy to New Branch
    If you followed the instructions above and checked the box when you created your branch, you don't need to do this step.  However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command.  You'll find it in the explorer context menu under TortoiseSVN > Switch: This brings up a dialog that shows you your current binding, and lets you enter a new URL to switch to: In the screenshot above, you can see that I'm currently bound to a branch, and so I could switch back to the trunk or to another branch.  If you're not sure what to enter here, you can click the [] next to the URL textbox to explore your repository and find the appropriate root URL to use.  Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository).
    Develop in the Branch
    Once you have created a branch and switched your working copy to use it, you can make changes and Commit them as usual.  Your commits are now going into the branch, so they won't impact other users or the build server that are working off of the trunk (or their own branches).  In theory you can keep on doing this forever, but practically it's a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync.
    Merge Changes from Trunk into your Branch
    Once you have been working in a branch for a little while, changes to the trunk will have occurred that you'll want to merge into your branch.  It's much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge in two very different codebases.  To perform the merge, simply go to the root of your branch working copy, right-click, and select TortoiseSVN->Merge.  You'll be presented with this dialog: In this case you want to leave the default setting, Merge a range of revisions.  Click Next.  Now choose the URL to merge from.  You should select the trunk of your current repository (which should be in the dropdown list, or you can click the [] to browse your repository for the correct URL).  You can leave everything else blank since you want to merge everything: Click Next.  Again you can leave the default settings.  If you want to do something more granular than everything in the trunk, you can select a different Merge depth, to include merging just one item in the tree.  You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea).  Here's what the dialog should look like before you click Merge: After clicking Merge (or Test merge) you should see a confirmation like this (it will say Test Only in the title if you click Test merge): Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that you've just integrated from the trunk.  Once everything works, Commit your changes, and then continue with your work on the branch.  Note that until you commit, nothing has actually changed in your branch on the server.  Other team members who may also be working in this branch won't be impacted, etc.  The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts.  When you do, you'll be presented with a dialog like this one: It's up to you which option you want to go with.  The more frequently you Merge, the fewer of these you'll have to deal with.  Also, be very sure that you're merging the right folders together.  If you try and merge your trunk with some subfolder in your branch's structure, you'll end up with all kinds of conflicts and problems.  Fortunately, they're only on your working copy (unless you commit them!) but if you see something like that, be sure to double-check your URL and your local file location.
    Merge Your Branch Back Into Trunk
    When you're done working in your branch, it's time to pull it back into the trunk.  The first thing you should do is follow the previous step's instructions for merging the latest from the trunk into your branch.  This lets you ensure that what you have in your branch works correctly with the current trunk.  Once you've done that and committed your changes to your branch, you're ready to proceed with this step. Once you're confident your branch is good to go, you should go to its root folder and select TortoiseSVN->Merge (as above) from the explorer right-click menu.  This time, select Reintegrate a branch as shown below: Click Next.  You'll want it to merge with the trunk, which should be the default: Click Next. Leave the default settings: Click Test merge to see a test, and then if all looks good, click Merge.  Note that if you haven't checked in your working copy changes, you'll see something like this: If on the other hand things are successful: After this step, it's likely you are finished working in your branch.  Don't forget to use the TortoiseSVN->Switch command to change your working copy back to the trunk.
    Delete the Branch
    You don't have to delete the branch, but over time the branches area of your repository will get cluttered, and in any event if they're not actively being worked on the branches are just taking up space and adding to later confusion.  Keeping your branches limited to things you're actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented-out bits of code. To delete the branch after you're finished with it, the simplest thing to do is choose TortoiseSVN->Repo Browser.  From there, assuming you did this from your branch, it should already be highlighted.  In any event, navigate to your branch in the treeview on the left, and then right-click and select Delete.  Enter a log message if you'd like: Click OK, and it's gone.  Don't be too afraid of this, though.  You can still get to the files by viewing the log for branches, and selecting a previous revision (anything before the delete action): If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches you're done with.
    Resources
    If you're using Eclipse, there's a nice write-up of the steps required by Zach Cox that I found helpful here.
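    If you prefer the command line to the TortoiseSVN dialogs, the same six steps map onto plain svn commands; a rough sketch (the repository URL and branch name are placeholders, and --reintegrate is the Subversion 1.5/1.6-era way to fold a finished branch back into trunk):

        rem 1. create the branch (a cheap server-side copy of trunk)
        svn copy http://server/svn/repo/trunk http://server/svn/repo/branches/my-feature -m "Branch for my-feature"
        rem 2. switch the working copy to the branch
        svn switch http://server/svn/repo/branches/my-feature
        rem 3. develop and commit as usual
        svn commit -m "Work in progress on my-feature"
        rem 4. periodically pull trunk changes into the branch (run at the branch root)
        svn merge http://server/svn/repo/trunk
        svn commit -m "Merged latest trunk into my-feature"
        rem 5. reintegrate the finished branch into trunk (run in a trunk working copy)
        svn switch http://server/svn/repo/trunk
        svn merge --reintegrate http://server/svn/repo/branches/my-feature
        svn commit -m "Reintegrated my-feature into trunk"
        rem 6. delete the branch
        svn delete http://server/svn/repo/branches/my-feature -m "Remove merged branch"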

    Read the article

  • Path is too long

    - by kaleidoscope
    Bugged by the irritating "Path is too long after being fully qualified" error while running in the Development Fabric? The solution is pretty funny and not so obvious, unfortunately. The culprit here is not your app, but the Development Fabric. The DevFab accumulates a lot of temporary junk comprising local storage locations, cached binaries, configuration, diagnostics information and cached compiled web site content files over its lifetime. They are typically stored at C:\Users\<username>\AppData\Local\dftmp. The Azure Tools will periodically clean this up, but sometimes you have to play janitor and take the law into your own hands ;). The csrun.exe has quite a few tricks up its sleeve. One of them is the ability to clean the development fabric's temporary junk accumulated over time. You can do this by running the Azure command prompt with elevated privileges and running csrun.exe /devfabric:shutdown and then csrun.exe /devfabric:clean. If the problem still persists, then the application directory structure could indeed be too long. A workaround for this is changing the Development Fabric temporary directory to point to a shorter path. The temporary directory path can be addressed by an environment variable, _CSRUN_STATE_DIRECTORY. You can try setting its value to something like "C:\WA" or "C:\A"; this will shave some 25+ characters off your path. Do not forget to close Visual Studio and explicitly shut down the dev fab with csrun.exe /devfabric:shutdown (under elevated privileges, of course). Source: http://geekswithblogs.net/IUnknown/archive/2010/02/03/no-more-path-is-too-long.aspx  :D   Sarang, K
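    Putting the workaround together, a short sketch from an elevated Windows Azure SDK command prompt (the short path is just an example):

        rem close Visual Studio first, then flush the development fabric
        csrun /devfabric:shutdown
        csrun /devfabric:clean
        rem optionally move the devfabric temp directory to a shorter path (picked up by new processes)
        setx _CSRUN_STATE_DIRECTORY C:\WA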

    Read the article

  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup). I've thought of using the unix command rsync. In fact, that's how I had been doing it, but I was looking for the compactness that Time Machine was able to achieve. Thanks.
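    A rough sketch of the rsync approach mentioned, assuming the external drive mounts at /Volumes/ExternalTB and the Xserve keeps its Time Machine data in Backups.backupdb on a volume named TMDrive; Time Machine snapshots are built from hard links, so -H matters, and this is best run while Time Machine itself is idle:

        # pull the Xserve's backup history across to the external drive
        rsync -aH --delete \
            root@xserve.example.com:/Volumes/TMDrive/Backups.backupdb/ \
            /Volumes/ExternalTB/Backups.backupdb/
        # add -E (Apple's rsync 2.6.9) or -A -X (rsync 3.x) if you also need ACLs and extended attributes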

    Read the article
