Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • Passing the output of the last command to sed as an argument

    - by neurolysis
    Hi, basically I want to automate adding something to xorg.conf in the right place. I've used some commands to get the number of the line I want to manipulate, but I'm not really sure how to pass this line number to sed as an argument (and NOT as something to be manipulated). I have been told about xargs and looked at the docs on it, but after some reading and experimentation I can't seem to get it to work. In case anyone can think of a better method entirely, the process I want to automate is just finding the line containing both "Identifier" and "Monitor0" (there will only be one) and adding a line below it. The problem with just finding Monitor0 and manipulating that line is that there are multiple lines containing Monitor0. I've got this far:

        fgrep -n "Monitor0" </etc/X11/xorg.conf | fgrep "Identifier" | cut -f1 -d:

    This prints the line number I want to pass to sed, but I'm not really sure how to do it. ...or is there a simpler way which I'm not seeing? Thanks. :)
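
    A minimal sketch of one way to wire the two together, using command substitution and sed's a (append) command; the appended Option line is just a placeholder, not a real xorg.conf setting:

        line=$(fgrep -n "Monitor0" /etc/X11/xorg.conf | fgrep "Identifier" | cut -f1 -d:)
        sudo sed -i "${line}a Option \"Placeholder\" \"value\"" /etc/X11/xorg.conf

    Command substitution makes the line number part of the sed script itself, so sed treats it as an address rather than as text to edit.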

  • "Misaligned partition" - Should I do repartition (how?)

    - by RndmUbuntuAmateur
    Tried to install Ubuntu 12.04 from a USB stick alongside the existing 64-bit Win7 OS, and now I'm not sure the install was completely successful: the Disk Utility tool claims that the extended partition (which contains the Ubuntu partition and swap) is "misaligned" and recommends repartitioning. What should I do, and if I should repartition, how do I do it (especially without losing the data on the Win7 partition)?

    Background info: a fairly new Thinkpad laptop (UEFI BIOS, if that matters). Before the install there were already a "SYSTEM_DRV" partition, the main Windows partition and a Lenovo recovery partition (all NTFS). Now the table looks like this: SYSTEM_DRV (sda1), Windows (sda2), Extended (sda4) (which contains Linux (sda5; ext4) and Swap (sda6)) and Recovery (sda3). Disk Utility gives the following message when I select the extended partition: "The partition is misaligned by 1024 bytes. This may result in very poor performance. Repartitioning is suggested."

    There were a couple of problems during the install, which I describe below in case they happen to be relevant. The installer claimed that it recognized the existing OSes fine, so I checked the corresponding option during the install. Next, when it asked me how to allocate the disk space, the first weird thing happened: the installer gave me a graphical "slider" to allocate disk space between the pre-existing Win7 OS and the new Ubuntu... but it did not tell me which partition would be for Ubuntu and which for Windows. Well, I decided to go with the setting the installer proposed. (Not sure if this is relevant, but I guess I'd better mention it anyway - previous partitioning tools have been more self-explanatory...) After the install (which reported no errors), GRUB/Ubuntu refused to boot. Luckily this problem was resolved quite straightforwardly with a live Ubuntu USB and Boot-Repair ("Recommended repair" worked just fine). After all this hassle I decided to check the partition table "just to be sure" - and Disk Utility gives the warning message I described.
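
    If you want to verify the alignment yourself before deciding anything, parted can check a partition against the disk's optimal alignment (a sketch; 4 is the extended partition's number from the table above):

        sudo parted /dev/sda align-check opt 4

    A misaligned extended partition is often harmless in itself, since it is only a container - what matters for performance is the alignment of the partitions that actually hold data (sda5 and sda6), which you can check the same way.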

  • How to tell start-stop-daemon to update $HOME and $USER according to the --chuid parameter

    - by iElectric
    I'm trying to run a service that uses the $HOME and $USER environment variables. I could set them in the service itself, but that would only be a temporary solution. Let's say I have a script test.sh with the following content:

        echo $USER

    And I run it with start-stop-daemon to see my results:

        $ start-stop-daemon --start --exec `pwd`/test.sh --user guest --group guest --chuid -guest
        root

    Seems like it does not update the environment - maybe that should be reported as a bug? I have found a nasty hacky solution, which (for unknown reasons) only works in this simple use case:

        $ start-stop-daemon --exec /usr/bin/sudo --start -- -u guest -i 'echo $USER'
        guest

    I'm sure someone else has stumbled upon this; I'm interested in a clean solution.

        $ start-stop-daemon --version
        start-stop-daemon 1.13.11+gentoo
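
    One cleaner workaround (a sketch, not a tested recipe; paths and names are assumptions): make env the program that start-stop-daemon execs, and let it set the variables before handing off to the real script:

        start-stop-daemon --start --chuid guest:guest \
            --exec /usr/bin/env -- HOME=/home/guest USER=guest /path/to/test.sh

    The underlying issue is that start-stop-daemon only switches the uid/gid; it never rewrites $HOME or $USER, so something downstream has to set them.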

  • Diff -b and -w difference

    - by dotancohen
    From the diff manpage:

        -b, --ignore-space-change
               ignore changes in the amount of white space
        -w, --ignore-all-space
               ignore all white space

    From this, I infer that the difference between the -b and -w options must be that -b is sensitive to the type of whitespace (tabs vs. spaces). However, that does not seem to be the case:

        $ diff 1.txt 2.txt
        1,3c1,3
        < Four spaces, changed to one tab
        < Eight Spaces, changed to two tabs
        < Four spaces, changed to two spaces
        ---
        > Four spaces, changed to one tab
        > Eight Spaces, changed to two tabs
        > Four spaces, changed to two spaces
        $ diff -b 1.txt 2.txt
        $ diff -w 1.txt 2.txt
        $

    So, what is the difference between the -b and -w options? Tested with diffutils 3.2 on Kubuntu Linux 13.04.
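
    The distinction only shows up when whitespace appears where there was none before: -b ignores changes in the amount of existing whitespace, while -w also ignores whitespace inserted out of nowhere. A quick demonstration:

        $ printf 'ab\n' > a.txt
        $ printf 'a b\n' > b.txt
        $ diff -w a.txt b.txt    # no output: all whitespace differences ignored
        $ diff -b a.txt b.txt    # reports a change: a space was added where there was none
        1c1
        < ab
        ---
        > a b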

  • Should I care about JUnit redundancy when using setUp() with the @Before annotation?

    - by c_maker
    Even though developers have switched from JUnit 3.x to 4.x, I still see the following 99% of the time:

        @Before
        public void setUp() { /* some setup code */ }

        @After
        public void tearDown() { /* some clean up code */ }

    Just to clarify my point... in JUnit 4.x, when the runners are set up correctly, the framework will pick up the @Before and @After annotations no matter the method name. So why do developers keep using the conventional JUnit 3.x names? Is there any harm in keeping the old names while also using the annotations (other than it makes me feel like devs do not know how this really works and, just in case, use the same name AND annotate as well)? Is there any harm in changing the names to something more meaningful, like eachTestMethod() (which reads nicely with @Before, as 'before each test method') or initializeEachTestMethod()? What do you do and why? I know this is a tiny thing (and may even be unimportant to some), but it is always in the back of my mind when I write a test and see this. I want to either follow this pattern or not, but I want to know why I am doing it and not just because 99% of my fellow developers do it as well.
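
    For what it's worth, a renamed fixture method behaves identically under JUnit 4 - the runner looks for the annotation, not the name. A small illustration (mine, not from the question):

        import org.junit.After;
        import org.junit.Before;

        public class WidgetTest {
            @Before
            public void initializeEachTestMethod() {
                // runs before every test method, regardless of what it is called
            }

            @After
            public void cleanUpEachTestMethod() {
                // runs after every test method
            }
        }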

  • Web server suddenly stopped working

    - by wezten
    I have a web server, which was working fine. It was also an FTP server and a Windows Remote Desktop server, all working fine. Someone called our ISP to increase the internet speed, and suddenly nothing works - I can connect with TeamViewer, but HTTP, FTP & RD don't work. I disabled the firewall. I ran Wireshark - the packets don't come through at all. I set the web server to port 20111, in case the ISP is blocking port 80, and again the packets didn't come through at all (localhost:20111 works fine). Port forwarding is set up for ports 80, 21, 3389 & 20111 to 10.0.0.32 (which is the correct address - checked with ipconfig). I restarted the router and the computer. I would be very grateful for any help.

  • How to start Ubuntu with no working video card?

    - by ViliusK
    I have a laptop with a broken video card. It has two operating systems installed - Windows 7 and Ubuntu 10.10 Desktop Edition - with GRUB to manage which operating system boots. Windows is the default OS, and Windows fails to boot without a video card. I'm checking with ping against the ports shown as used by DHCP in my router. A normal boot of Ubuntu also fails, and the machine restarts after a while. But when I choose (blindly, by watching the HDD indicator and counting button presses when the GRUB menu appears) to boot the second Ubuntu option (rescue mode), it starts and I can ping it. But when I try to connect to it through SSH, I get a "connection refused" error from PuTTY. I've taken the HDD out of my laptop and put it in a WD Passport case, so I can now connect it to another computer to edit configuration files. How can I check whether the SSH server is working? How do I enable it in rescue mode? Or better, how do I disable the video card requirement so Ubuntu boots in normal mode?
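
    With the drive mounted on another machine you can at least confirm an SSH server is installed before fighting the blind boot (a sketch; the mount point /mnt is an assumption):

        ls /mnt/etc/init/ssh.conf                 # the upstart job only exists if openssh-server is installed
        sudo chroot /mnt dpkg -l openssh-server   # or check the package database directly

    "Connection refused" usually means nothing is listening on port 22, so a missing openssh-server package is the first thing to rule out.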

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04 and I would like to install on it a VM with Gitlab (using Vagrant and VirtualBox). Let's say that the address to access Gitlab is gitlab.mydomain.com. The DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab (either for pushing to a repository or for using the web interface) from the outside. The VM has been configured with its own IP address. This means that when someone browses http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In this case, I guess it only works for HTTP requests, but not for pushing to a Git repository on the VM.
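
    A minimal reverse-proxy sketch for the HTTP side (the VM address 192.168.33.10 is an assumption; requires mod_proxy and mod_proxy_http enabled):

        <VirtualHost *:80>
            ServerName gitlab.mydomain.com
            ProxyPreserveHost On
            ProxyPass / http://192.168.33.10/
            ProxyPassReverse / http://192.168.33.10/
        </VirtualHost>

    This covers the web interface and HTTP(S) pushes. Pushes over SSH never touch Apache; those would need a separate port forward on the host (e.g. an iptables or VirtualBox NAT rule) from some host port to the VM's port 22.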

  • Late-2011 MacBook Pro with SSD seems slower than it should be

    - by chris
    OK, I just got an SSD for my late-2011 MacBook Pro. From what I've read, the laptop is capable of 6 Gbps, so I got myself an OCZ Agility 240 GB 6 Gbps SSD. I decided to join the club and speed test it with Blackmagic Disk Speed Test... and the results are equivalent to those of a 3 Gbps setup. So I am wondering: is there a configuration setting somewhere I can tweak? The original drive was a 500 GB spinning HDD, so I'm figuring maybe that's why there may be a hidden setting somewhere I don't know about. I just want to see if anyone else knows whether this is the case.

    Edit: I should also mention this was a fresh factory install; nothing was carried over from the original drive.
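
    One quick check before hunting for settings: ask the system what link speed was actually negotiated (runs in Terminal on OS X; the grep pattern is approximate):

        system_profiler SPSerialATADataType | grep -i 'link speed'

    If "Negotiated Link Speed" reports 3 Gigabit, the drive and controller really did settle on SATA II - on some 2011 models that came down to the SATA cable rather than any software setting.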

  • After reboot, allocated node gets commissioned again

    - by cloudfan
    I had set up MAAS with Juju and deployed OpenStack into it for testing. During my vacation I shut down all the computers. Afterwards I started first the MAAS server, then the node where Juju was bootstrapped and juju-gui was deployed to. Sadly, the node got commissioned again, so all my deployments are gone. I decommissioned that node in MAAS and bootstrapped it again. Afterwards I tested again: bootstrapping the node with Juju, shutting down both nodes and starting them in the same order again. The Juju node gets commissioned again. After bootstrapping, everything looked fine in the MAAS GUI (the node was set to allocated to root, which was also the case after the restart), the Juju GUI was available and juju status worked fine. Before my vacation I also had some other nodes deployed through Juju. They all seem to be still available and have not been commissioned again. Do you have any ideas what might have happened? Is there any issue with a bootstrapped Juju node and commissioning? Any help or hints on what I could check are appreciated! Thanks in advance for your help!

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, updates occur in the shared library and we simply deploy the updated library to both systems. The systems themselves rarely change. Since both systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are:

    - Easier maintenance
    - Decreased testing/QA time

    Unfortunately, this isn't enough. They would like us to provide hard numbers on the amount of hours this will save in the future and how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

  • Solving a puzzle in JavaScript [on hold]

    - by Gandalf StormCrow
    I've recently been trying to brush up my JavaScript skills, so I have a friend who gives me puzzles from time to time to solve. Yesterday I got this (quoted exactly as given, errors included):

        function testFun() {
            f = {};
            for( var i=0 ; i<3 : i++ ) {
                f[i] = function() { alert("sum='+i+f.length); }
            }
            return f;
        }

        Expected Results:
        testFun()[0]() should alert "sum=0"
        testFun()[1]() should alert "sum=2"
        testFun()[2]() should alert "sum=4"

    I did this, which works as requested above:

        function testFun() {
            var i, f = {};
            for (i = 0; i < 3; i++) {
                f[i] = (function(number) {
                    return function() {
                        alert("sum=" + (number * 2));
                    }
                }(i));
            }
            return f;
        }

    Today I got a new puzzle (again quoted as given):

        Name everything wrong with this javascript code, then tell how you would re-write it.

        function testFun(fInput) {
            f = fInput || {}; // append three functions
            for( var i=0 ; i<3 : i++ ) {
                f[i] = function() { alert("sum='+i+f.length); }
            }
            return f;
        }

        // Sample Expected Results (do not change)
        myvar = testFun();
        myvar[0]();          // should alert "sum=0"
        myvar[1]();          // should alert "sum=2"
        testFun(['a'])[2](); // should alert "sum=5"

    How do I accomplish the third case, testFun(['a'])[2]()? Also, could my answer from yesterday be written better - what can be improved, if so?
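
    One reading of the expected outputs is sum = 2*i + the input's initial length, captured before the loop overwrites anything. A sketch along the lines of yesterday's fix (the empty-array default is an assumption; the original {} has no meaningful length at all, which is one of the things "wrong" with it):

        function testFun(fInput) {
            var f = fInput || [];          // default to an array so f.length means something
            var initialLength = f.length;  // capture the length before appending
            for (var i = 0; i < 3; i++) {
                f[i] = (function (number) {
                    return function () {
                        alert("sum=" + (number * 2 + initialLength));
                    };
                }(i));
            }
            return f;
        }

    With no input, initialLength is 0, giving sum=0/2/4; with ['a'] it is 1, so testFun(['a'])[2]() alerts sum=5.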

  • Backup of whole hard drive during full operation with Acronis True Image Home 2010

    - by testing
    Currently I'm creating a backup of one of my hard drives - my main hard drive, the one the operating system is running on. Because the backup is done during full operation, I'm wondering whether the backup really includes all files (registry, ...). Can I restore the backup to another hard drive and then run the operating system again without problems? Normally I would have said that you have to boot from a CD (without the OS running) to make a backup. I did some Google research but didn't find my case covered so far.

  • How to achieve redundancy across data centers?

    - by BrandonBT
    I have a LAMP server with a lot of hardware redundancy built in. I am not worried about the server becoming unavailable. What I am worried about, however, are potential network issues in the data center the server is in. What I would like to have is another server in another data center for redundancy. Load balancing is less of a concern. With that said, I am relatively clueless on two points:

    1. How to have two servers in two geographically separate data centers that hold exactly the same data, in terms of both files and MySQL databases.
    2. How to ensure that all traffic coming into one data center is automatically transferred to the other data center in the case of a network or server failure at the first.

    Any guidance on how to accomplish the above two problems would be greatly appreciated.
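
    For the data half, the stock answer is MySQL replication plus periodic rsync for the files - a sketch of the primary's side of the config (all names are placeholders):

        # /etc/mysql/my.cnf on the primary - a minimal sketch
        [mysqld]
        server-id    = 1
        log_bin      = /var/log/mysql/mysql-bin.log
        binlog_do_db = mydb    # hypothetical database name

        # files: push the web root to the standby periodically, e.g. from cron:
        # rsync -az --delete /var/www/ standby.example.com:/var/www/

    For the traffic half, the usual options are DNS failover (low-TTL records switched by a health check) or an IP-level approach like BGP/anycast if both data centers can announce the same prefix; DNS failover is the simpler of the two.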

  • Deploying InfoPath forms - idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms - .XSN files. Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.

    • Regardless of how the install command is run - PowerShell or the GUI in Central Admin - SharePoint wraps the XSN inside a solution (WSP), then installs and deploys the solution.
    • The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer, if the file name is shorter than 16) and the required .wsp at the end. So if the form name was MyInfopathForm.xsn, the solution will be named form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    • It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger - I suspect a middle finger - so when you deploy the same form many times (many versions of it, or, as it was in my case, testing a script time and again), you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!

    Well, there are ways around it. When working by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!
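
    The remove-before-redeploy step looks roughly like this in SharePoint 2010 PowerShell (a sketch, not the author's actual script; a production version would wait for the retraction timer job to finish before removing):

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $solution = Get-SPSolution "form-MyInfopathForm.wsp" -ErrorAction SilentlyContinue
        if ($solution) {
            if ($solution.Deployed) { Uninstall-SPSolution $solution -Confirm:$false }
            Remove-SPSolution $solution -Confirm:$false
        }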

  • How can I redirect all files in a directory that don't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer had updated several webpages. The issue is that the developer made each new webpage with a new filename, and deleted the old filenames. I've worked with .htaccess redirects for a few months now and have some understanding of the usage; however, I am stumped with this task. The old pages were named like so:

        www.domain.tld/subdir/file.html

    The new pages are named:

        www.domain.tld/subdir/file-new-name.html

    The first word of each new file is the exact name of the old file, and all new files have the same last two words:

        www.domain.tld/subdir/file1-new-name.html
        www.domain.tld/subdir/file2-new-name.html
        www.domain.tld/subdir/file3-new-name.html
        etc.

    We also need to be able to access the URL:

        www.domain.tld/subdir/

    The new files have been indexed by Google (the old URLs cause 404s and need to be redirected to the new ones so that Google will be friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted the redirect in many different ways without success, but I'll show the one that stumps me the most:

        RewriteBase /
        RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
        RewriteCond %{THE_REQUEST} !^subdir/$
        RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]

    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a URL like so:

        www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html

    I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance!

    EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example:

        (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)
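
    The loop happens because THE_REQUEST holds the whole request line ("GET /subdir/file.html HTTP/1.1"), so conditions anchored at ^subdir/ never match and never stop the rewriting. A sketch of one fix, testing REQUEST_URI and anchoring on the suffix instead:

        RewriteEngine On
        RewriteBase /
        # already-renamed pages pass through untouched
        RewriteCond %{REQUEST_URI} !-new-name\.html$
        RewriteCond %{REQUEST_URI} !^/subdir/$
        RewriteRule ^subdir/(.+)\.html$ /subdir/$1-new-name.html [R=301,NC,L]

    The first request for /subdir/file1.html redirects once; the redirected request ends in -new-name.html, so the first condition fails and the chain stops.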

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)?

    EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
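
    The usual pattern is exactly one long-lived writer per index, committed periodically and closed only at shutdown. A sketch against the Lucene 3.x-era API the question uses (the path and field values are placeholders):

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        public class SharedWriter {
            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(new File("/tmp/index"));
                IndexWriter writer = new IndexWriter(dir,
                        new IndexWriterConfig(Version.LUCENE_36,
                                new StandardAnalyzer(Version.LUCENE_36)));

                Document doc = new Document();
                doc.add(new Field("title", "hello world", Field.Store.YES, Field.Index.ANALYZED));
                writer.addDocument(doc); // IndexWriter is thread-safe; share this one instance
                writer.commit();         // durable and visible to newly opened readers; no close() needed

                writer.close();          // only at application shutdown
            }
        }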

  • How to kill an openvz container?

    - by johannes
    An OpenVZ container can be stopped with vzctl stop <id>, but this needs the cooperation of the init inside the container. In case a container is compromised, a way is needed to stop it without its cooperation. Something like a vzctl kill <id> is needed, which kills all processes inside the container and puts it into the stopped state. Such a kill command is not listed in the manpage. How can an OpenVZ container be killed/stopped without needing its cooperation?
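
    If your vzctl version supports it, the --fast flag to stop skips the graceful shutdown inside the container (worth confirming in your own man page, since the asker's doesn't list a kill option):

        vzctl stop 101 --fast    # 101 is a placeholder container id

    Failing that, the container's processes are visible in the host's process table, so they can be killed from the host before running vzctl stop.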

  • Laptop won't load an OS, or even an OS setup

    - by Talasan Nicholson
    My fiancée's laptop started to crash and would get BSOD errors to the point that she couldn't even boot up, because it would either take forever or just crash. So I figured I'd try to reformat, as it's done this before and that was the solution last time. On Windows 7, it will not get past the "Setup is starting..." screen. On Windows XP, it crashes (BSOD) just as the setup is about to start. I shut it down and let it sit for 5+ hours in case it was overheating, but it does the same thing anyway. By now I figure it's something to do with the hardware, but I hope not. Any ideas?

  • Visits, Pageviews, Bounce Rate, New Visitors, Visit Duration (Google Analytics): which one is the top priority for SEO?

    - by HOY
    This is the case: my site is getting a lot of traffic from an image (a company logo image) because this image is ranked 1st in Google search results for a company's title. (I have no idea how that happened.) This image is a must for my website, but it is not relevant to the site content, so irrelevant people search for the image and find out about my site. As a result, I get interesting statistics: http://postimage.org/image/3oyvrjoz9/

    Pros: Total Visits & Avg. New Visits
    Cons: Avg. Page/Visit, Avg. Visit Duration, Bounce Rate

    In summary, I am confused about whether this image is helpful to my website, because I don't know the balance between those five statistics.

    P.S.: My website is 2 months old, and we are working on SEO at the moment.
    Another P.S.: I kindly ask you not to provide assumptions, because I also have assumptions; I need real knowledge.

    Edit: Search keyword: arcelik logo. Search site: google.com.tr. Search URL: https://www.google.com.tr/search?hl=en&q=arcelik+logo&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41524429,d.Yms&biw=1366&bih=667&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=oZIDUfutAseVswa9zYHwCw

  • Space Invaders-type game: Keeping the enemies aligned with each other as they turn around?

    - by CorundumGames
    OK, so here's the lowdown of the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in an enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e. not in a grid as planned). I haven't implemented the enemies going downward yet, so let's not worry about that right now. Any tips?
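
    One approach that keeps the grid intact by construction (a sketch, not the asker's code): let the squadron own the single velocity and move every Enemy by the same amount in the same frame, testing the formation's bounding box - not individual enemies - against the screen edge:

        # assumes each Enemy has a pygame.Rect attribute called rect
        class Squadron:
            def __init__(self, enemies, speed=2):
                self.enemies = enemies   # flat list of Enemy objects
                self.vx = speed

            def update(self, screen_width, drop=16):
                left = min(e.rect.left for e in self.enemies)
                right = max(e.rect.right for e in self.enemies)
                hit_edge = (self.vx > 0 and right + self.vx > screen_width) or \
                           (self.vx < 0 and left + self.vx < 0)
                if hit_edge:
                    self.vx = -self.vx
                    for e in self.enemies:
                        e.rect.move_ip(0, drop)       # whole formation steps down together
                else:
                    for e in self.enemies:
                        e.rect.move_ip(self.vx, 0)    # whole formation moves sideways together

    Because no Enemy ever decides to turn on its own, relative positions can't drift and the formation stays a grid.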

  • Extend university wifi network [migrated]

    - by asfasdoiuh ouhouhouh
    I live on a university campus, and I can get a wifi signal on the outside of my window but not in the house. The solution I use at the moment is a USB wifi dongle outside, connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I was trying to find another solution to improve the reception of my network. One idea is to set up a router on the outside (in a place with a stronger signal) and bring the connection inside the house with an ethernet cable, but the problem is that our uni wifi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication has to happen for the MAC address that connects to the net (so the client appliance in this case). If I use a router with MAC-clone capability, will I be redirected through the captive portal on my laptop and be able to log in from there, or do I need to set up my router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want? Thank you all.

  • Understanding interfaces [closed]

    - by user985482
    Possible Duplicate: When to use abstract classes instead of interfaces and extension methods in C#? Why are interfaces useful? What is the point of an interface? What other reasons are there to write interfaces rather than abstract classes? What is the point of having every service class have an interface? Is it bad habit not using interfaces?

    I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good book for introducing you to the C# language. I have just finished reading a chapter on interfaces, and although I understood the syntax of creating and using interfaces, I have trouble understanding why I should use them. Correct me if I am wrong, but in an interface you can only declare method names and parameters. The body of the method has to be declared in the class that implements the interface. So in this case, why should I declare an interface if I am going to declare the entire method in the class that implements that interface? What is the point? Does this have something to do with the fact that a class can implement multiple interfaces?
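
    A tiny illustration of the payoff (C#; the example is mine, not from the book): code written against the interface doesn't care which implementation it gets, so implementations can be swapped, mocked, or multiplied without touching the caller:

        interface IGreeter
        {
            string Greet(string name);
        }

        class FormalGreeter : IGreeter
        {
            public string Greet(string name) { return "Good day, " + name + "."; }
        }

        class CasualGreeter : IGreeter
        {
            public string Greet(string name) { return "Hey " + name + "!"; }
        }

        class Program
        {
            // depends only on the contract, not on any concrete class
            static void Welcome(IGreeter greeter)
            {
                System.Console.WriteLine(greeter.Greet("Ada"));
            }

            static void Main()
            {
                Welcome(new FormalGreeter());   // either implementation works
                Welcome(new CasualGreeter());   // without changing Welcome at all
            }
        }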

  • How do I get a Dell Latitude e6420 working?

    - by David_G
    I've just installed Ubuntu 12.04 (64-bit) on a brand new Dell Latitude e6420, and I'm having a few problems. This laptop has an Optimus setup - i.e. integrated graphics plus an NVIDIA Quadro NVS 4200M. First problem: I ran setup, etc., and discovered that I can only run Unity 2D - if I try to log in with Unity 3D, it just defaults to 2D. This is with nvidia-current installed (302.07). Note also that I can't run nvidia-settings ("You do not appear to be using the NVIDIA X driver.") and no additional drivers are found ("No proprietary drivers are in use on this system"). I tried to troubleshoot this and removed the NVIDIA driver, leaving (I guess) just the open source Nouveau driver - in that case Unity 3D did work, but I was stuck with Nouveau powering the integrated graphics. So, obviously, I want to run Unity 3D using the more powerful NVIDIA graphics card. I've tried a bit of tinkering, but I'm not sure of the best way to proceed, or perhaps more importantly, what the best final solution might be. I've heard about Bumblebee, but frankly I would prefer to have the proprietary NVIDIA drivers working properly. Any help would be much appreciated!
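
    For what it's worth, on 12.04-era Optimus hardware the stock NVIDIA driver could not drive the internal display, so Bumblebee (which still uses the proprietary driver, just per application) was the usual route. The recipe from that time, as best I recall it - treat the PPA name as an assumption to verify:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxgears    # runs the given program on the NVIDIA GPU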

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that:

    - In plain language, briefly and directly tells the user what is wrong and why. Something like: "Cannot connect because of a network problem."
    - Affords the user a means to retry connecting (or retries automatically). The mobile context of use means users anticipate interruptibility and disruption of task, so they will try again as an effective course of action.
    - Tells the user when the connection is re-established, and off they go.
    - Saves any work already done, implicitly. (Bonus points on the ADF critical task setting scale.)

    The following images, showing my experience of reading ADF-EMG Google Groups notifications on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable messaging user experience for mobile apps in this kind of scenario.

    [Image: Inline connection lost message with Retry button]
    [Image: Connection re-established toaster message]

    The UX possible depends on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi), but taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality: "connection to the server lost, so just click OK" or "contact your sysadmin".
