Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • Down for everyone or just me?

    - by Click Ok
    When I try to access a website and it is down, I head to http://www.downforeveryoneorjustme.com and test it. But lately, my home network PCs cannot access facebook.com, and I tried that service and the answer was: It's just you. http://facebook.com is up. Ok, that got me. I tried several browsers and 3 PCs in my LAN and it doesn't work. I don't know how to troubleshoot this. What are some step-by-step instructions to troubleshoot this problem? Output from the ping command: Pinging facebook.com [69.171.234.21] with 32 bytes of data: Reply from 69.171.234.21: bytes=32 time=256ms TTL=245 Reply from 69.171.234.21: bytes=32 time=255ms TTL=246 Reply from 69.171.234.21: bytes=32 time=251ms TTL=245 Reply from 69.171.234.21: bytes=32 time=255ms TTL=246 PS: Thank you for the nice help, but then I suppose that the first step of a step-by-step troubleshoot is to ping from the command line?
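
    A reasonable first pass, given that ping succeeds while browsers fail, is to rule out DNS trouble and a stale hosts entry. A minimal sketch from the Windows command prompt (the public resolver 8.8.8.8 is just an example):

        :: Compare what your resolver returns with a public resolver
        nslookup facebook.com
        nslookup facebook.com 8.8.8.8

        :: Check for a stale hosts-file override
        type %SystemRoot%\System32\drivers\etc\hosts

        :: See where the path stalls, then flush the local DNS cache
        tracert facebook.com
        ipconfig /flushdns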

    Read the article

  • Encryption of OS X to Windows SMB traffic and password

    - by Brian
    I connected to a Windows Server 2008 R2 shared folder from a Mac OS X Mountain Lion computer over the Internet using this command: mount -t smbfs //user@server/path/to/share local_folder Is traffic encrypted by default? What settings do I look at (if any) to know whether it was encrypted? If it wasn't encrypted, what's the easiest way to encrypt it? Was the password I typed at the command line encrypted? Update: sysadmin1138 has addressed the password question. Does anyone know how I can tell if the traffic itself is being encrypted?
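
    One way to answer the traffic question yourself, as a sketch: capture the SMB session and look for readable file names. SMB payload encryption only arrived with SMB 3.0 (Windows 8 / Server 2012), so a 2008 R2 share would normally be carried in the clear. The interface name en0 is an assumption; adjust it to your active interface:

        # Print SMB/CIFS traffic (TCP 445) as ASCII; readable paths and
        # file names in the output mean the session is not encrypted.
        sudo tcpdump -i en0 -s 0 -A 'tcp port 445'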

    Read the article

  • CentOS 6.5 proxy bypass/no_proxy not working

    - by Naruto Uzumaki
    I am running CentOS 6.5 on my desktop. I've set the network proxy using the Network Proxy application provided under Preferences. I've also set the following exceptions: localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16 But whenever I use wget (I'm testing the proxy settings using wget), wget tries to connect to the proxy for private addresses, while wget localhost works fine and doesn't use the proxy. I also removed all the proxy settings and set the proxy in the shell: export http_proxy="<proxy_url>:<port>" export https_proxy="<proxy_url>:<port>" export no_proxy="localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16" It works when I use the command wget <external_url> or wget localhost but fails when I use the command wget <private address from the $no_proxy variable>. I tried setting the variables in Ubuntu 14.04 as well and faced the same issue. Regards,
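
    A likely culprit: wget matches no_proxy against host-name suffixes and literal addresses; it does not understand CIDR ranges such as 172.16.0.0/12 (note also the stray dot in 192.168.0.0./16 above). A sketch that stays within what wget supports, with 192.168.1.10 as an example host:

        # List literal hosts or domain suffixes instead of CIDR blocks
        export no_proxy="localhost,127.0.0.1,192.168.1.10,.internal.example"
        wget http://192.168.1.10/

        # Or bypass the proxy explicitly for a single request
        wget --no-proxy http://192.168.1.10/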

    Read the article

  • How do you change your Airport or Ethernet MAC address in Mac OS X 10.6?

    - by Dave Gallagher
    I have a MacBook Pro and would like to set a custom MAC address for either my Airport WiFi card, or Ethernet port. In older versions of Mac OS X, you could do it like this: $ sudo ifconfig en0 ether 00:11:22:33:44:55 // Ethernet $ sudo ifconfig en1 lladdr AA:BB:CC:DD:EE:FF // Airport For it to work on Airport, you'd have to power it on (e.g. $ sudo ifconfig en1 up), ensure it's not connected to any wireless network, and execute the command. I'm aware such a change won't propagate across reboots. Unfortunately, this doesn't work on Mac OS X 10.6.6 anymore. Apple appears to have removed the functionality (the command fails silently). Does anyone have any idea how to do it? Thanks for any help you can offer! :)
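
    For what it's worth, the workaround commonly reported for 10.6 is to force a disassociation with the bundled (private) airport tool and then use ether rather than lladdr on en1; treat this as a sketch, with a placeholder address:

        # Disassociate from any wireless network without powering the card off
        sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -z

        # Then set the spoofed address on the Airport interface
        sudo ifconfig en1 ether 00:11:22:33:44:55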

    Read the article

  • Git Daemon on linux?

    - by bwawok
    Trying to set up a simple git-daemon on a Linux server and talk to it from a Windows box. On the Linux server: Make a folder /home/foo/bar cd to /home/foo/bar Do a git --bare init here Do a touch git-daemon-export-ok cd to /home/foo Run the command git-daemon --verbose --reuseaddr --base-path=/home/foo --enable=receive-pack On the Windows client with TortoiseGit: Do git.exe clone --progress -v "git://servername/bar" "C:\source\myFolderName" (works) Create file a.txt, add it to git, and commit (works) Do a git.exe pull "origin" master and get fatal: Couldn't find remote ref master (makes sense, master isn't there yet) Do a git.exe push "origin" master:master and TortoiseGit hangs forever without doing anything I realize why I can't pull from master yet on the remote branch... but why can't I push my first commit into the remote repo? Step 4 really should work. Tried it both with TortoiseGit and the msysgit command line; in both cases it hangs forever. What am I missing? The server has no useful log output.
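
    Two things worth checking, sketched below: git-daemon also honors a per-repository switch for the receive-pack service, and a quick ls-remote confirms whether port 9418 traffic flows both ways (paths as in the question):

        # On the server: explicitly allow pushes into this bare repo
        git --git-dir=/home/foo/bar config daemon.receivepack true

        # From the client: verify the daemon answers at all
        git ls-remote git://servername/bar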

    Read the article

  • WOL not working

    - by Maciej Swic
    I have a Marvell Yukon integrated NIC and I have installed the WOL package on my FreeBSD-based NAS. I'm trying to wake my PC using the command "wol M:A:C:A:D:D:R". The command line reports back that it is "waking", however nothing happens. I found no reference to WOL whatsoever in the BIOS, and I enabled Magic Packet WOL in Windows on that interface. I also double-checked the MAC address and that I entered it in the correct format for "wol". I'm on Windows 7. What next? =/
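
    Something worth trying from the FreeBSD side, sketched with placeholder values (and assuming your wol build has the -i/-p options): aim the magic packet at the subnet's directed broadcast address instead of the default, and confirm WOL in the adapter's advanced driver settings, not just the Windows power-management checkbox.

        # Send the magic packet to the LAN broadcast address explicitly
        # (MAC and broadcast address are placeholders)
        wol -i 192.168.1.255 -p 9 00:11:22:33:44:55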

    Read the article

  • How to enable Unity 3D support in 12.04 using open-source drivers for RadeonHD cards?

    - by martin
    As the title says, I can't enable Unity 3D support when I'm using the open-source drivers (xorg-edgers). I have an XFX Radeon HD 6950, by the way. If I install the proprietary 12.3 drivers from AMD it works, but I get poorer 2D performance than with the open-source drivers, and I also get some freezes and lock-ups at random. Because of this I'm trying the open-source drivers, and so far no issues at all, except this one. Running the command $ /usr/lib/nux/unity_support_test -p shows this: OpenGL vendor string: VMware, Inc. OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300) OpenGL version string: 2.1 Mesa 8.0.2 Not software rendered: no Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no And the command $ lspci -nn | grep VGA shows: 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Cayman PRO [Radeon HD 6950] [1002:6719] So, is this normal? Do I need to go back to the proprietary drivers to enable Unity 3D? If anyone can help, I'd much appreciate it.
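
    The renderer string above (Gallium 0.4 on llvmpipe) means X has fallen back to software rendering, i.e. the hardware radeon driver is not being picked up; that, not Unity itself, is the thing to fix. A quick sketch for confirming what is actually loading:

        # Which OpenGL driver is in use (llvmpipe = software fallback)
        glxinfo | grep -i "renderer string"

        # Did the kernel driver claim the card and load its firmware?
        dmesg | grep -i radeon

        # Verbose DRI loading messages from Mesa
        LIBGL_DEBUG=verbose glxinfo 2>&1 | grep -i dri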

    Read the article

  • Studies of Pair Programming on Translation Projects

    - by gmletzkojr
    I am looking for information (i.e., studies, metrics, etc.) on pair programming when translating a project from an "older" language to a "newer" language. In this particular case, translating means line-for-line translation wherever possible, modifying the design only when absolutely necessary, not when the modification would merely provide improved performance. I have done pair programming in new development, and I am well aware of the pros and cons of pairing in that environment. However, I haven't been able to find any information on this particular case. Any help is appreciated.

    Read the article

  • PASS Summit 2013 - A Bunch of Blog Posts Recently

    - by RickHeiges
    Recently, there have been a number of blog posts about having the 2013 PASS Summit in Seattle or elsewhere. I had a post in November about the process and some of the major factors that were on my mind. You can read it here. There is value in moving the Summit to another venue. There is also value in having the Summit in the same location/venue year after year. Many of the posts that I read recently make excellent arguments for each. As time goes on and you hear another good argument for one...(read more)

    Read the article

  • Why is my FTP output file blank?

    - by Nathan Long
    From the Windows command prompt, I have FTP'd to a Windows web server. I can get a file, and I can see a directory listing with dir, but I want to save that list locally. I tried dir > c:\somefile.txt, and the file is created, but it's blank. Same thing if I do ls > c:\somefile.txt. The result is the same when I FTP from a Linux box. FTP sends back the following: 200 PORT command successful 150 Opening ASCII mode data connection for /bin/ls 226 Transfer complete
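
    The likely explanation: inside the ftp client, > is not shell redirection. The built-in syntax is dir [remote-directory] [local-file], so the > is being read as a (nonexistent) remote path, which is why the local file is created but stays empty. A sketch using . for the current remote directory:

        ftp> dir . c:\somefile.txt
        ftp> ls -l . c:\somefile.txt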

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering what generic build tools out there include their binary run-times and do not depend on another environment not shipped with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I'd find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and especially Node.js projects. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node.js build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on Stack Overflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in the .NET case; maybe several Node/npm commands for Node), and then take care of the remaining build/test steps, instead of setting all of these up in MSBuild (and I wonder whether there is an equivalent story for Node).
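
    Make and CMake are the usual answers here, both native binaries with no Java or Ruby runtime; and for the orchestration described in the update, a plain shell script is often enough. A minimal sketch (the solution name and the exact steps are hypothetical):

        #!/bin/sh
        set -e
        # Compile: msbuild on Windows, xbuild under Mono
        msbuild MySolution.sln /p:Configuration=Release
        # Then whatever test/package steps the project needs
        npm install
        npm test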

    Read the article

  • Is it possible to shutdown a remote computer running Windows 7 via Telnet?

    - by Ryan Shripat
    I've successfully connected to my Windows 7 desktop over wifi via Telnet from an XP Home netbook. To log in, I use the following command: telnet -l "win7desktop\win7user" win7desktop win7user in this case is an Administrator on win7desktop and is also a member of the Telnet Clients group. The problem I have is that when I attempt to shut down win7desktop by issuing the following command: shutdown /s ...at the Telnet prompt, I get an Access Denied error: Access is denied.(5) Is it possible to shut down a remote computer running Windows 7 via Telnet? If so, what do I need to do to get around the Access is denied error?
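
    A commonly cited culprit, sketched below: when a local administrator connects over the network, UAC's remote token filtering strips the elevated token, and shutdown then reports Access is denied.(5). The usual workaround is the LocalAccountTokenFilterPolicy registry value; it is a real setting but it weakens remote UAC, so apply it with care:

        :: On win7desktop, run once from an elevated prompt
        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

        :: Then retry from the telnet session
        shutdown /s /t 0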

    Read the article

  • ODI 12c - Getting up and running fast

    - by David Allan
    Here's a quick A-B-C to show you how to quickly get up and running with ODI 12c, from getting the software to creating a repository via the wizard or the command line, then installing an agent for running load plans and the like.

    A. Get the software from OTN and install Studio. Check out this viewlet here for quickly doing this.

    B. Create a repository using the RCU; check out this viewlet here, which uses the FMW Repository Creation Utility. You can also silently create (and drop) a repository using the command line, which is really easy:

        .\rcu -silent -createRepository -connectString yourhost:1521:orcl.st-users.us.oracle.com -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -schemaPrefix X -component ODI -component IAU -component IAU_APPEND -component IAU_VIEWER -component OPSS < passwords.txt

    where the passwords file contains info such as: sysdba_passwd newschema_passwd odi_user_passwd D workreposname workrepos_passwd. You can find details about the silent use of RCU here in the FMW documentation.

    C. Quickly create an agent for executing load plans and the like - there is a great OBE for this; check it out here. If you are on your laptop and just want as minimal an agent as possible, then this link is a must.

    With these three steps you are ready to get to the fun stuff! Check out more OBEs here - keep on the lookout for more!

    Read the article

  • Problem after system update. Root permission denied, user lib permission denied.

    - by gregor
    After I updated openSUSE 11.1 with the update packages from October and November 2009, I couldn't use the ping command. For root it gives Permission denied, and for a regular user I get libresolv.so.2: cannot open shared object file: Permission denied. The other culprit, besides the update, could be the installation of google-chrome (a .deb file converted to .rpm, plus some symbolic links for libs to make Chrome work). When the system rebooted, the X server also came up blank. Before the reboot it worked, as did Chrome, but the ping command didn't work even before the reboot. Any ideas? I ran some sort of disk check from a rescue CD; libresolv looks the same as the other libs, root has uid=0 ...
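
    The symptoms (root denied too, and a library that won't open for a regular user) suggest file permissions were clobbered by the update: ping needs its setuid bit, and libresolv must be world-readable. A sketch for checking and restoring them:

        # ping must be setuid root: expect -rwsr-xr-x
        ls -l /bin/ping
        chmod 4755 /bin/ping

        # libresolv must be readable/executable by everyone
        ls -l /lib/libresolv.so.2
        chmod 755 /lib/libresolv.so.2

        # rpm can report which file modes differ from the package defaults
        rpm -V iputils glibc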

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob, it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work.

    The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with Javascript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.

    But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. Javascript did an OK job, and thanks to clever programmers writing great frameworks like JQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike.

    HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.

    The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it.

    But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash based on the Internet, it's likely that my audience will see it.

    And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents, it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard.

    Back to the noise, one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs.

    We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Team Foundation Server – How to pass ReferencePath argument to MSBuild

    - by Gopinath
    When we manually build a .NET project using Visual Studio, the reference paths set in Project Properties are picked up by Visual Studio for referring to dependent DLLs. But when the project is built using TFS, the reference paths specified in project properties are not considered. This is because reference paths are user-specific settings; they are not stored in .proj files (they are stored in user settings files). The TFS build may break if it does not find the required DLLs in the GAC. We can solve the problem by passing the ReferencePath parameter to MSBuild in the TFS build configuration: Go to Team Explorer Select Build Definition >> Edit Build Definition Switch to the Process tab Navigate to the Advanced section and locate MSBuild Arguments Add the following: /p:ReferencePath="{File path}"
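
    For reference, the same property works from any MSBuild invocation, and multiple directories can be passed as a semicolon-separated list inside the quotes (the project and paths here are examples):

        msbuild MyProject.csproj /p:ReferencePath="C:\libs\common;C:\libs\thirdparty"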

    Read the article

  • Xen and HyperVM build question on os template

    - by Levi De Haan
    I recently built a server with HyperVM and Xen. I know Xen from the command line, but HyperVM ties into our WHMCS, so it's a requirement. My question is this: when I build a new OS template, my partition table is gone, and I know why, but I was wondering if anyone has built anything in HyperVM for adding in partition tables, so I don't have to reinvent the wheel :). I can do it from the command line in the created VM with fdisk, and I have tracked down the creation scripts for HyperVM, but I am unsure whether these insert directly into the machine, as it looks like a lot of what it does is externalized and left for Xen to assign (things like the IP address, etc.). Oh, and on an aside: when I modify the .cnf file to change the boot disk from cdrom to drive on Windows, booting using HyperVM overwrites my setting again... frustrating as heck. I've been trying to track down where in the code it does this; has anyone else had this problem, and if so, how did you fix it?
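
    On the scripting side, partition creation is easier to automate non-interactively with sfdisk than with fdisk, which may make it simpler to graft into HyperVM's creation scripts. A sketch using the old sfdisk input format; the device name and sizes are placeholders:

        # 512 MB swap, rest of the disk as one Linux partition
        # (-uM = sizes in megabytes; /dev/xvda is a placeholder)
        printf ',512,S\n,,L\n' | sfdisk -uM /dev/xvda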

    Read the article

  • Redmine: reposman.rb succeeds, but does not make SVN repos available to projects

    - by Joey Adams
    I'm testing reposman.rb on the command-line (before I make it a cron job): /usr/sbin/reposman.rb --svn-dir=/var/svn \ --redmine-host=http://example.com/projects --key='redacted' \ --owner='nobody' --group='nobody' It succeeded, printing messages for projects that didn't have repos yet: repository /var/svn/project1 created repository /var/svn/project2 created And printed nothing after running the same command again, indicating it remembered the repos. However, if I look at the Repository settings in Redmine for project1 and project2, they aren't set. Although the SVN repo is created, the Redmine projects aren't configured. How do I get reposman.rb to automatically configure Redmine projects to use the repos after they're set up?
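
    One likely missing piece is the --url switch: reposman.rb only fills in each project's repository setting when it knows the base URL under which Redmine should address the repositories. A sketch of the same run with it added (the file:// URL assumes Redmine and the repositories live on the same host):

        /usr/sbin/reposman.rb --svn-dir=/var/svn \
          --redmine-host=http://example.com/projects --key='redacted' \
          --owner='nobody' --group='nobody' \
          --url=file:///var/svn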

    Read the article

  • Are there examples of non CRUD approaches?

    - by Pieter B
    I'm a programmer but have also worked as an archivist. As an archivist, it's a lot about keeping data. I often get into arguments with colleagues when it comes to operations on data. I don't like the U and the D in CRUD too much. Rather than updating a record, I prefer to add a new one with a reference to the old record. That way you build a history of changes. I also don't like deleting records, but rather mark them as inactive. Is there a term for this? Basically only creating and reading data? Are there examples of this approach?

    Read the article

  • What are the downsides to using dependency injection?

    - by kerry
    I recently came across an interesting question on Stack Overflow with some interesting responses. I like this post for three reasons. First, I am a big fan of dependency injection: it forces you to decouple your code, create cohesive interfaces, and should result in testable classes. Second, the author took the approach I usually do when trying to evaluate a technique or technology: suspend personal feelings and try to find some compelling arguments against it. Third, it proved that it is very difficult to come up with a compelling argument against dependency injection. What are the downsides to using dependency injection?

    Read the article

  • Installing a new SQL Server instance fails

    - by Rubio
    In my setup, I previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008, and I updated the command-line parameters to those documented for the latter. If the machine already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows: /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword The requested instance name does not exist on the target machine. This ends in error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features and Redistributable Features. Whatever I select, clicking Next results in the same error (There are validation errors on this page). Any ideas, anyone? Thanks, -- Rubio
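
    One way to narrow this down, sketched below: ask the registry which instances Setup can already see. "No features were installed ... may already be installed" usually means feature detection matched an existing installation rather than the new instance name:

        :: List the instance names SQL Server Setup will discover
        reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"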

    Read the article

  • AIX network parameters to close TCP sockets of unplugged devices

    - by ADD Geek
    Hi there. We have an AIX box running what we in banking call an "ATM switch" - not the ATM networking switch, but the bank ATM driver - where we have some ATM machines connected to two server processes. The problem is, when we disconnect any of these machines, the netstat -na | grep <port number> command shows that the socket established for the disconnected device is still established; we have to manually send a command from the software to make the socket aware that it is no longer live. Is there a TCP-level parameter to make the connection aware within a minute or two that the device is no longer connected? We set the following parameters with root privileges: no -o tcp_keepidle=1000 no -o tcp_keepcnt=2 no -o tcp_keepintvl=150 no -o tcp_finwait2=100 They originally had the default values, but even after we changed these parameters and restarted the server processes, the problem was still there.
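
    Two caveats that may explain the behavior: AIX keepalive tunables are expressed in half-second units, so tcp_keepidle=1000 still waits over eight minutes before the first probe, and keepalive only fires on sockets where the application has actually set SO_KEEPALIVE. A sketch with values for roughly two minutes of dead-peer detection:

        # Half-second units: 240 = 120 s idle, 10 = 5 s between probes
        no -o tcp_keepidle=240
        no -o tcp_keepintvl=10
        no -o tcp_keepcnt=3

        # -p also records the value so it persists across reboots (AIX 5.3+)
        no -p -o tcp_keepidle=240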

    Read the article
