Search Results

Search found 14610 results on 585 pages for 'william tell'.

Page 283/585 | < Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >

  • Wireless is disabled by hardware (Lenovo 3000 G430)

    - by sudheer
    I have a problem with my Wi-Fi switch; please tell me a solution for my problem (Wi-Fi is disabled by hardware). The output of sudo lshw -C network is: *-network DISABLED description: Wireless interface product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:06:00.0 logical name: eth2 version: 01 serial: 00:21:00:72:3a:93 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg resources: irq:19 memory:f4700000-f4703fff *-network description: Ethernet interface product: NetLink BCM5906M Fast Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:07:00.0 logical name: eth0 version: 02 serial: 00:1e:68:ad:24:0b size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=sb v3.04 ip=172.16.52.79 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:47 memory:f4600000-f460ffff The output of iwconfig is: lo no wireless extensions. eth2 IEEE 802.11 Access Point: Not-Associated Link Quality:5 Signal level:0 Noise level:0 Rx invalid nwid:0 invalid crypt:0 invalid misc:0 eth0 no wireless extensions. The output of sudo iwlist scanning is: lo Interface doesn't support scanning. eth2 Failed to read scan data : Invalid argument eth0 Interface doesn't support scanning.
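    For a card that the driver reports as disabled by hardware, the usual first check is rfkill, which shows whether the radio is soft-blocked or hard-blocked. A minimal diagnostic sketch, assuming the rfkill utility is installed:

        sudo rfkill list           # lists each radio and whether it is soft- or hard-blocked
        sudo rfkill unblock all    # clears soft blocks only
        # a hard block can only be released by the physical Wi-Fi switch or the Fn key combination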

    Read the article

  • OS X Automator empty, blank or null value.

    - by Brian
    I have some data files, mostly Excel, Word, and PDF files, and most of them have no extension on them (they are missing the .doc or .xls). This data now needs to be used in a Windows environment. I have created Automator apps for each of the file types I want to add the extension to. The problem is that they also add the extension to files that already have one, so data.xls becomes data.xls.xls. I would like to find a way to add the extension only to the files without one. How do I tell the Finder filter that I only want it to return files without extensions? I see how to add a line to filter by a given extension, but I don't know how to ask for only blank, null, or missing extensions. Thanks
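    For comparison, outside Automator the same filter is easy to express in the shell. A hedged sketch, assuming every file sits in one folder and should get .xls appended (the folder path and the extension are placeholders; dry-run it by replacing mv with echo first):

        # renames every regular file whose name contains no dot,
        # leaving files that already have an extension untouched
        find /path/to/data -type f ! -name '*.*' -exec sh -c 'mv "$1" "$1.xls"' _ {} \;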

    Read the article

  • Trying to install Proprietary Nvidia Graphics Drivers

    - by Peter Snow
    After reading and trying many different suggestions for some hours, I returned to this how-to: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia The first problem I encounter is how to identify which of the listed drivers supports my Nvidia GeForce 630M graphics card. Following the links doesn't really help, since it is not stated there either (except where support for a new device was added to a later driver, which is explicitly stated; the devices originally covered are not listed). However, even if I knew which driver I needed, if it doesn't appear in the 'Additional Drivers' dialogue (see below), how would I install it? Second issue: the article goes on to say that available drivers for my hardware are usually listed in 'Additional Drivers'. In my case, they aren't. Unfortunately, it doesn't tell me how to correct that or work around it. I've checked the BIOS and there is no way offered there to disable the integrated graphics, only the Nvidia graphics. I've also tried each available option in this: $ sudo update-alternatives --config i386-linux-gnu_gl_conf My system is an Acer Aspire 4752G bought in May 2012. I'm running Ubuntu 12.04 LTS. uname -a: 3.2.0-38-generic-pae #61-Ubuntu SMP Tue Feb 19 12:39:51 UTC 2013 i686 i686 i386 GNU/Linux It's 64-bit hardware, but I installed a 32-bit OS for greater software compatibility. Running $ sudo tail -fn 500 /var/log/Xorg.0.log | grep '(EE)' returns: (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 28.886] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) The reason for wanting the proprietary drivers is that my laptop comes with a 3D-accelerated graphics adapter, and rather than confining myself to struggling with the on-board graphics, I would rather use it. I also want to experiment with using it for bitcoin mining (which uses the GPU for computing power).
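    As a starting point, a hedged sketch of confirming what hardware is present and what drivers the 12.04 repositories package (the package names are assumptions from that era and may differ):

        lspci -nn | grep -iE 'vga|3d'        # should list both the Intel IGP and the NVIDIA 630M on an Optimus laptop
        apt-cache search nvidia-current      # shows the packaged proprietary driver versions
        sudo apt-get install nvidia-current  # one possible choice; Optimus hardware usually also needs bumblebee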

    Read the article

  • Are short abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there any case nowadays for brevity over clarity in method names? Tonight I came across the Python function repr(), which seems like a bad name for a method to me. It's not an English word. It apparently is an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practice agreed that names should be at least full words and descriptive enough to reveal enough about the method that you could easily find it when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-completion makes brevity a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to extract the many things into their own functions, and repeat until the lines are not too long. Are these names just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands like these is different from methods in code, so whether those are bad names is a separate matter.

    Read the article

  • "The protocol 'net.msmq' is not supported."

    - by Randolpho St. John
    OMG, a new lesson! Will wonders never cease? So I ran into an interesting issue setting up a WCF service to consume an MSMQ queue. I won't bother you with the details of how to actually build a WCF/MSMQ service; there are plenty of tutorials on the subject. I want to share with you an interesting error that I ran into and the surprisingly simple fix. The error occurs when attempting to generate a Service Reference, or even simply when browsing to the WSDL of your WCF/MSMQ service, in the form of a YSOD with the following error: "The protocol 'net.msmq' is not supported." A lot of Googling on the subject turned up plenty of questions with the same error but no answers. So I went digging into some application-level config files on a server that already had a WCF/MSMQ service successfully set up by the network admin, and the answer was amazingly simple: if you are hosting an MSMQ/WCF service in IIS, you have to tell IIS to allow the net.msmq protocol. It's in the advanced settings for the application or site in which you are hosting the service. .... aaaand, that's it.
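    For anyone who prefers the command line over the IIS Manager dialog, the same setting can be applied with appcmd. A sketch, assuming the service lives in an application called MyQueueService under the Default Web Site (adjust both names to your own setup):

        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyQueueService" /enabledProtocols:http,net.msmq

    The site itself also needs a net.msmq binding, and the Net.Msmq Listener Adapter service has to be running for activation to work.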

    Read the article

  • Deactivating website in ISPConfig shows another site

    - by Mattias
    A long time ago, one of our clients set up a subdomain pointing to our IP address. We added a website (Sites > Website > Add new website) that points to one of our servers. The project is now closed and the client wants us to remove the content. When we deactivate this site (by unticking Active), it automatically defaults to another website we have in our list (!?). So, because the client is still pointing to our IP, when entering project.client.com another client's project shows up by default. How is this possible? Any suggestions? I can of course give you more details when you tell me what details you need. Thanks

    Read the article

  • With Google Analytics, is it possible to check a specific page in Multi-Channel conversion attribution?

    - by Emmett R.
    I'm somewhat new to Google Analytics, and I'm trying to track all conversions that are assisted by a particular landing page, because I don't expect an instant purchase. I have e-commerce tracking set up. Due to the constraints of the associated ad campaign, I can't include the source/medium code in the url when people go to the landing page, and all of my traffic to the landing page is likely to be direct, so I'm not sure how to tell Multi-Channel marketing that it's a significant page. I know how to add events to a page, but I'm still figuring out what they can and cannot do. Would creating a redirect from the landing url to an identical url+source/medium code work? Any advice on how to accomplish this would be greatly appreciated. Tracking the final sale conversion is not the issue. Ecommerce reporting is functioning just fine on the site. I just want to report the landing page as an assist, whenever it shows up in the funnel, and I need to be able to do that across multiple visits.

    Read the article

  • ARR servers in the Load Balancing pool automatically go from unavailable to available

    - by Chris
    I have 3 IIS web servers in an ARR web farm. When we do rolling releases, we take one server offline as a backup server and move it into an "Unavailable" state. I have noticed that with ARR, servers will not stay in this state; they come back online automatically hours or days later. Does anyone know how to remedy this situation? This is very bad, as the server that is down is typically not running the correct version of our code. I need to keep a server unavailable until I tell it otherwise.

    Read the article

  • use subdomain on different host

    - by Roy
    I want to accomplish something that I thought was simple. My wish is as follows: I have a domain name with hosting and a WordPress multisite (with a subfolder setup) installed and running: gangleri.nl. I have another domain at another host, without hosting: monas.nl. I created a subdomain on gangleri.nl, monas.gangleri.nl, and the domain redirects to that subdomain. Now what I want is for monas.nl to act like a site in its own right, not a site in a subdomain. I would like to have post URLs like monas.nl/posttitle. I first thought to do this with the DNS settings of monas.nl. I now have a URL forward; cURL is not what I want, and I did not manage to get A records or CNAMEs to work. I tried using the .htaccess file of the WP installation in monas.gangleri.nl. I tried 301s, rewrites, and whatnot, but also without success. Meanwhile, I have been reading so much that I no longer have a clue what to do. An A record doesn't sound probable, since I have no IP for the subdomain, so an A record would point to gangleri.nl rather than using the subdomain. Also, I have no idea whether I should do something in the DNS settings of gangleri.nl, of monas.nl, of both, or somewhere else. I have the feeling I've tried everything, but the more I try and read about it, the less I can get my head around it. People talk about A records to subdomains while I can only use IPs, or about CNAME settings that my host doesn't support, or something. Could somebody tell me if what I want is possible and, if so, take me by the hand and guide me through it?

    Read the article

  • Trying to compile the newest Apache from source with the newest OpenSSL

    - by AlexMA
    I need to install Apache 2.4.10 using OpenSSL 1.0.1i. I compiled OpenSSL from source with: $ ./config \ --prefix=/opt/openssl-1.0.1i \ --openssldir=/opt/openssl-1.0.1i $ make $ sudo make install and Apache with: ./configure --prefix=/etc/apache2 \ --enable-access_compat=shared \ --enable-actions=shared \ --enable-alias=shared \ --enable-allowmethods=shared \ --enable-auth_basic=shared \ --enable-authn_core=shared \ --enable-authn_file=shared \ --enable-authz_core=shared \ --enable-authz_groupfile=shared \ --enable-authz_host=shared \ --enable-authz_user=shared \ --enable-autoindex=shared \ --enable-dir=shared \ --enable-env=shared \ --enable-headers=shared \ --enable-include=shared \ --enable-log_config=shared \ --enable-mime=shared \ --enable-negotiation=shared \ --enable-proxy=shared \ --enable-proxy_http=shared \ --enable-rewrite=shared \ --enable-setenvif=shared \ --enable-ssl=shared \ --enable-unixd=shared \ --enable-ssl \ --with-ssl=/opt/openssl-1.0.1i \ --enable-ssl-staticlib-deps \ --enable-mods-static=ssl make (I would run sudo make install next, but I get an error). I'm essentially following the guide here, except with slightly newer versions. My problem is that I get a linker error when I run make for Apache: Making all in support make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support' make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support' /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \ -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5' I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell Apache to use a newer OpenSSL, it would be greatly appreciated; I've got Apache working otherwise.
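    That undefined reference to dlclose usually means the static libcrypto needs libdl added to the final link. A hedged sketch of the common workaround, reusing the same configure flags as above but with LIBS preset (abbreviated here; keep the full list of --enable-* flags):

        cd httpd-2.4.10
        make clean
        LIBS="-ldl" ./configure --prefix=/etc/apache2 \
            --enable-ssl --with-ssl=/opt/openssl-1.0.1i \
            --enable-ssl-staticlib-deps --enable-mods-static=ssl
        make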

    Read the article

  • script to automatically test if a web site is available

    - by Xoundboy
    I'm a lone web developer with my own CentOS VPS hosting a few small web sites for my clients. Today I discovered my httpd service had stopped (for no apparent reason, but that's another thread). I restarted it, but now I need to find a way to be notified by email and/or SMS if it happens again; I don't like it when a client rings me to tell me their web site doesn't work! I know there are probably many different possibilities, including server-monitoring software. I think all I really need is a script that I can run as a cron job from my dev host (which is permanently running in my office) that attempts to load a page from my production server and, if it doesn't load within say 30 seconds, sends me an email or SMS. I'm pretty rubbish at shell scripting, hence this question. Any suggestions would be gratefully appreciated; thanks to all you clever sysadmin guys and girls out there :)
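    A minimal sketch of such a cron job, assuming curl and a working mail command on the dev host (the URL and address are placeholders); SMS can be layered on top via an email-to-SMS gateway:

        #!/bin/sh
        # check-site.sh: alert if the page does not load within 30 seconds
        URL="http://www.example.com/"
        if ! curl --silent --fail --max-time 30 --output /dev/null "$URL"; then
            echo "$URL failed to respond at $(date)" | mail -s "Site down: $URL" you@example.com
        fi

    A crontab entry such as */5 * * * * /home/you/check-site.sh would run the check every five minutes.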

    Read the article

  • Impersonation on IIS 7.0 passes the machine credentials for Crystal Reports

    - by pknox
    On a 32-bit Windows 2008 server running the Donor2 Application in the Classic .NET Managed Pipeline mode, configured for Windows Integrated Authentication and Impersonation, all of the .NET pages are passing the authenticated user’s credentials [DomainName\UserName]. This is the correct, expected behavior. The Crystal Reports pages, instead of passing the authenticated user’s credentials, are passing the IIS Server’s credentials [DomainName\MachineName$]. One of the very frustrating aspects of this situation is that I have another server which, as far as I can tell, is configured identically. That server, when loading Crystal Reports, is passing the authenticated user’s credentials [DomainName\UserName] as expected. I have obviously missed something, but I have no idea what it could be.

    Read the article

  • Simple dig output?

    - by knocte
    In a script I want to be able to write an IP address somewhere easily, so I thought of using dig (or a similar command) in back-ticks. However, the simplest output I've been able to come up with from dig's parameters is > dig -t A +noall +answer www.google.com www.google.com. 300 IN A 173.194.66.106 www.google.com. 300 IN A 173.194.66.104 Is there any way (an extra argument, or a different tool instead of dig) to get rid of the junk apart from the IP address? (And please don't tell me to use sed.) Thanks
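    The +short flag is usually all that is needed here; a quick sketch:

        dig +short www.google.com A              # prints only the answer data, one address per line
        dig +short www.google.com A | head -n1   # if the script wants a single address

    If the name is a CNAME, the target name may appear on the first line before the addresses.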

    Read the article

  • Preferred lambda syntax?

    - by Roger Alsing
    I'm playing around a bit with my own C-like DSL grammar and would like some opinions. I've reserved the use of "(...)" for invocations, e.g.: foo(1,2); My grammar supports "trailing closures", pretty much like Ruby's blocks, which can be passed as the last argument of an invocation. Currently my grammar supports trailing closures like this: foo(1,2) { //parameterless closure passed as the last argument to foo } or foo(1,2) [x] { //closure with one argument (x) passed as the last argument to foo print (x); } The reason I use [args] instead of (args) is that (args) is ambiguous: foo(1,2) (x) { } There is no way in this case to tell whether foo expects three arguments (int, int, closure(x)) or whether foo expects two arguments (int, int) and returns a closure with one argument, closure(x). So that's pretty much the reason I use [] for now. I could change this to something like: foo(1,2) : (x) { } or foo(1,2) (x) -> { } So the actual question is, what do you think looks best? [...] is somewhat wrist-unfriendly. let x = [a,b] { } Ideas?

    Read the article

  • How can I access Windows XP Remote Desktop on a private IP from the internet?

    - by Jennie
    The machine is behind a DSL router on a private IP, so it cannot receive inbound requests. I want to know: Is there any way to set up the router's NAT (I highly doubt it supports one-to-one port mapping) without disturbing other users on the same router? I have another machine on the internet which has a public IP and no firewall. Can I use this machine as a relay server, so that to initiate the connection the XP machine sends an outbound request, and the relay server then passes my connection through, letting me reach my machine on the private IP without any problem? Please advise.
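    One common way to use such a relay is a reverse SSH tunnel. A hedged sketch, assuming an SSH server on the public machine and an SSH client (for example PuTTY's plink) on the XP box; host names and the port are placeholders:

        REM on the XP machine: forward port 13389 on the relay back to the local RDP port
        plink -N -R 13389:localhost:3389 user@public-relay-host

        REM from anywhere on the internet, connect Remote Desktop to the relay's forwarded port
        mstsc /v:public-relay-host:13389

    For the forwarded port to be reachable from outside the relay itself, its sshd_config needs GatewayPorts yes; otherwise, SSH into the relay first and add a local forward from there.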

    Read the article

  • VMWare tools on Ubuntu Server 10.10 kernel source problem

    - by Hamid Elaosta
    After installing VMware Tools and running the configuration script, it needs my kernel headers to compile some modules. OK, so I'll give them to it, but it just won't work. It asks for the path of the directory of C header files that match my running kernel. If I run uname -r I get 2.6.35-22-generic-pae, so I tell it the source path is /lib/modules/2.6.25-22-generic-pae/build/include, and it returns "The directory of kernel headers (version @@VMWARE@@ UTS_RELEASE) does not match your running kernel (version 2.6.35-22-generic-pae)." I'm confused; can anyone offer suggestions, please? I installed the kernel source and headers myself using sudo apt-get install linux-headers-$(uname -r)
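    Note that the path quoted above says 2.6.25 while the running kernel is 2.6.35. Assuming that mismatch is the whole problem, a sketch of producing a path the installer will accept:

        sudo apt-get install linux-headers-$(uname -r)    # headers that exactly match the running kernel
        echo /usr/src/linux-headers-$(uname -r)/include   # typically the directory the VMware installer wants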

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5 GB), and after 20 years of (mostly) happy times using VS they have removed macros, one of the most handy features. The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif and then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V (to paste the new code). Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I used macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of the screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

    Read the article

  • Asterisk failing at startup after upgrading to asterisk18

    - by Supratik
    I was using asterisk16 and asterisk16-skypeforasterisk, which were working fine. I recently upgraded to asterisk18 and asterisk18-skypeforasterisk, and since then I receive the following error message: Asterisk ended with exit status 1 Asterisk died with code 1. Asterisk could not start! Use 'tail /var/log/asterisk/full' to find out why. When I checked the log I got the following messages: codec_g729a.c: == Found total of 11 G.729 licenses translate.c: empty buf size, you need to supply one Now, if I remove the /var/lib/asterisk/licenses folder, it works fine. Can you please tell me what the issue could be here? Warm regards, Supratik

    Read the article

  • My laptop was stolen. What do I need to do?

    - by chris
    My laptop was recently stolen. It was a corporate system running XP, which means it was part of a domain - I'm assuming that makes it impossible for someone to log into it, although I know there are ways to reset the local admin account. Is there any way to tell if someone boots it up? I was logged into gmail, using two factor authentication. I will change my password, but is there any chance of tracking any attempted accesses? Other than changing passwords on all my web accounts, is there anything else I need to do?

    Read the article

  • this.BoundingBox.Intersects(Wall[0].BoundingBox) not working properly

    - by Pieter
    I seem to be having this problem a lot. I'm still learning XNA/C# and trying to make a classic paddle-and-ball game. The problem I run into (and after debugging have no answer for) is that every time I run my game and press either of the movement keys, the paddle won't move. Debugging shows that it never gets to the movement part, but I can't understand why not. Here's my code: // This is the if statement for checking Left movement. if (keyboardState.IsKeyDown(Keys.Left) || keyboardState.IsKeyDown(Keys.A)) { if (!CheckCollision(walls[0])) { Location.X -= Velocity; } } // This is the CheckCollision(Wall wall) method public bool CheckCollision(Wall wall) { if (this.BoundingBox.Intersects(wall.BoundingBox)) { return true; } return false; } As far as I can tell, there should be absolutely no problem with this. I initialize the bounding box in the constructor whenever a new instance of Walls or Paddle is created: this.BoundingBox = new Rectangle(0, 0, Sprite.Width, Sprite.Height); Any idea why this isn't working? I have previously succeeded using the whole Location.X < Wall.Location.X + Wall.Texture.Width approach, but to me that seems like too much coding if a simple boolean check could do the job.

    Read the article

  • WSS "Cannot connect to the configuration database"

    - by Tim
    I have 64-bit WSS 3.0 installed on a 64-bit Windows 2003 server. After installing WSS 3.0, I switched IIS to run in 32-bit emulation mode, as we have some applications that require this. I'm getting a "Cannot connect to the configuration database" error when trying to get to the Central Admin page, and I wondered: a. whether the setup I have won't work and I'm wasting my time trying to figure this out, or b. whether anyone has any suggestions for resolving the database connection issue. The identity that the app pool for WSS runs under has all the required permissions in SQL, as far as I can tell. Any help would be appreciated!

    Read the article

  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad, and how do I tell the BIOS to stop it? Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine for the first several hours until a problem that plagued me when WinXP was on that machine happened again: it froze. Completely froze. None of the myriad ways I have found here on Ask Ubuntu helped me regain control, except a long press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up, and with all the odd blocks of yellow and flecks of color showing on my screen (under both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing me these odd issues. (A good canned-air dusting of the entire interior fixed the color-fleck problem.) Again through much googling and numerous answers found on Ask Ubuntu, I stumbled onto the lshw command. After going through its output line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (the DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never fully troubleshot those issues, so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (it wasn't there under WinXP) is that the monitor periodically gets no signal from Ubuntu during boot. For the first couple of days, it would disappear after the Dell boot screen and reappear at the Ubuntu login. Now it disappears after the Dell boot screen and doesn't return at all; I have to hit F12, where I can load a safe-mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?
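    Shared PCI interrupts are normal and are rarely a problem on their own, but it is easy to see exactly which drivers sit on IRQ 16 and whether it is firing at all. A quick sketch:

        grep -E '^ *16:' /proc/interrupts    # interrupt count on IRQ 16 plus the drivers sharing it
        lspci -v | grep -iE '^[0-9a-f]|irq'  # which PCI devices were assigned which IRQ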

    Read the article

  • Update the remote of a git branch after renaming it

    - by Dror
    Consider the following situation. A remote repository has two branches, master and b1. In addition it has two clones, repo1 and repo2, and both have b1 checked out. At some point, in repo1, the name of b1 was changed. As far as I can tell, the following is the right procedure for renaming b1: $ git branch -m b1 b2 # changes the name of b1 to b2 $ git push origin :b1 # delete b1 remotely $ git push --set-upstream origin b2 # create b2 remotely and direct the local branch to track it Now, afterwards, in repo2 I face a problem. git pull doesn't pull the changes from the branch (which is now called b2 on the remote). The error returned is: Your configuration specifies to merge with the ref 'b1' from the remote, but no such ref was fetched. What is the right way to do this, both the renaming part and the updating in the other clones?
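    For the repo2 side, a minimal sketch, assuming the remote is called origin and the local branch should simply follow the rename:

        git fetch origin --prune                    # pick up b2 and drop the remote-tracking ref for the deleted b1
        git branch -m b1 b2                         # rename the local branch to match
        git branch --set-upstream-to=origin/b2 b2   # repoint its upstream at the new remote branch
        git pull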

    Read the article

  • Configure GNU screen so that it stores command histories in files

    - by user65950
    I would like to configure GNU screen so that it stores the command histories of all the different windows in different files. I know that by default GNU screen does not store the command histories of its different windows in files at all (it keeps them in memory instead), but it might be possible to tell it to store them in files instead. The different command-history files should have names like <session>.<window>.history, or similar. Does anyone have an idea how to do that? (Just to be clear, I want each GNU screen window to write a different file. I like that each window has a different history, and I typically run different types of commands in the different windows.)
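    As far as screen's own options go, it only logs terminal output (the logfile and log settings), not shell command history, but the shells running inside it can keep per-window history files because screen exports STY and WINDOW to every window. A sketch for ~/.bashrc, assuming bash:

        if [ -n "$STY" ] && [ -n "$WINDOW" ]; then
            mkdir -p "$HOME/.screen_history"
            export HISTFILE="$HOME/.screen_history/${STY}.${WINDOW}.history"
            shopt -s histappend                    # append on exit instead of overwriting
            export PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"   # flush each command as it runs
        fi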

    Read the article

  • Laptop Locking Up

    - by David
    I am having a very weird issue with a Lenovo W510 laptop. It locks up randomly. I have had it lock up during POST, during Linux boot-up, during login, and after login. I have performed the following tests on the laptop: I ran memtest. I took out the extra memory module. I swapped the HDD with another HDD that had Windows 7 on it (it BSODs and, before anyone could possibly read the error line, it restarts). I tried taking the battery out and booting with only the power cord. The only other components I can think of being the problem are the motherboard or the PSU. If anyone has any advice, I appreciate it. If not, the HP guy will be here in a few days to fix it. I would just love to call them up and tell them that the service is no longer needed.

    Read the article
