Search Results

Search found 40031 results on 1602 pages for 'command message'.


  • How to Find Out Which Devices Are Supported By Solaris 11

    - by rickramsey
    Image of monks gathering on the steps of the main hall in the Tashilhunpo Monastery is courtesy of Alison Whitear Travel Photography. In his update of Brian Leonard's original Taking Your First Steps With Oracle Solaris, Glynn Foster walks you through the most basic steps required to get Oracle Solaris 11 operational:

    - Installing Solaris (VirtualBox, bare metal, or multi-boot)
    - Managing users (root role, sudo command)
    - Managing services with SMF (svcs and svcadm)
    - Connecting to the network (with SMF, or manually via dladm and ipadm)
    - Figuring out the directory structure
    - Updating software (with the IPS GUI or the pkg command)
    - Managing package repositories
    - Creating and managing additional boot environments

    One of the things you'll have to consider as you install Solaris 11 on an x86 system is whether Solaris has the proper drivers for the devices on your system. In the section titled "Installing On Bare Metal as a Standalone System," Glynn shows you how to use the Device Driver utility that's included with the Graphical Installer. However, if you want to get that information before you start installing Solaris 11 on your x86 system, you can consult the x86 Device List that's part of the Oracle Solaris Hardware Compatibility List (HCL). Here's how:

    1. Open the Device List.
    2. Scroll down to the table.
    3. Open the "Select Release" pull-down menu and pick "Solaris 11 11/11."
    4. Move over to the "Select Device Type" pull-down menu and pick the device type, or "All."

    The table will list all the devices of that type that are supported by Solaris 11, including PCI ID and vendor. In the coming days the Solaris Hardware Compatibility List will be updated with more Solaris 11 content. Stay tuned. - Rick Ramsey

    Read the article

  • Which browser does my computer use to open a Web page? [on hold]

    - by msh210
    I know little about networking and the Internet, but, from what I understand, it works — very approximately — as follows: I, sitting at the computer example.com, send a message saying, roughly, "get http://s.tk" to my ISP, which passes the message along, eventually to the machine at s.tk. The s.tk machine gets "example.com has sent 'get http://s.tk'", so it sends some file to its ISP, which passes the file along, eventually back to the machine at example.com. When the file gets back to example.com, my computer, how does my computer know what to do with it? I'm sure the headers (or something else) indicate it's a Web page rather than, say, a Usenet post — that's not my question. My question is: how does it know whether to display the Web page in my open Opera window or my open Firefox window, or my other open Firefox window, or, heck, to open a new browser instance?

    Read the article

  • Conky - How to get weather alerts using rss?

    - by BeSlayed
    Quick question about how to get weather alerts with conky. I'm using the following bit of code:

        ${execpi 3600 ~/.conky/scripts/conky-rss.sh "http://www.weather.gov/alerts/wwarssget.php?zone=MDZ006"|sed '1,3d'|fold -sw 62}

    This (correctly) displays a message to the effect of "There are no weather alerts for ..." when there are no weather alerts. However, when there is an alert, it simply displays a message like "Short-term forecast for ..." with no further information. Any suggestions? How are other people getting weather alert info in their conkys?
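
    One hedged way to surface the alert body rather than just its headline — a minimal sketch, not the asker's conky-rss.sh, and it assumes the NWS feed carries the alert text in <summary> elements (inspect the feed and adjust the tag names):

        #!/bin/sh
        # Sketch: pull the zone feed from the question, keep both the
        # headline (<title>) and body (<summary>) text, strip the tags,
        # and wrap to conky's 62-column width.
        URL="http://www.weather.gov/alerts/wwarssget.php?zone=MDZ006"
        curl -s "$URL" |
          grep -E '<(title|summary)>' |   # keep headline + body elements
          sed 's/<[^>]*>//g' |            # strip the XML tags themselves
          fold -sw 62                     # wrap for conky

    Dropped into the execpi line in place of conky-rss.sh, something along these lines would show the alert text instead of only the "Short-term forecast for ..." title.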

    Read the article

  • WSDL-world vs CLR-world – some differences

    - by nmarun
    A change in mindset is required when switching between a typical CLR application and a web service application. There are some things in a CLR environment that just don't add up in a WSDL arena (and vice-versa). I'm listing some of them here. When I say WSDL-world, I'm mostly talking with respect to a WCF Service and/or a Web Service.

    No (direct) method overloading: You definitely can have overloaded methods in, say, a Console application, but when it comes to a WCF / Web Services application, you need to adorn these overloaded methods with a special attribute so the service knows which specific method to invoke. When you're working with WCF, use the Name property of the OperationContract attribute to provide unique names.

        [OperationContract(Name = "AddInt")]
        int Add(int arg1, int arg2);

        [OperationContract(Name = "AddDouble")]
        double Add(double arg1, double arg2);

    By default, the proxy generates the code for this as:

        [System.ServiceModel.OperationContractAttribute(
            Action="http://tempuri.org/ILearnWcfService/AddInt",
            ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")]
        int AddInt(int arg1, int arg2);

        [System.ServiceModel.OperationContractAttribute(
            Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble",
            ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")]
        double AddDouble(double arg1, double arg2);

    With Web Services, though, the story is slightly different. Even after setting the MessageName property of the WebMethod attribute, the proxy does not change the name of the method; only the underlying SOAP message changes.

        [WebMethod]
        public string HelloGalaxy()
        {
            return "Hello Milky Way!";
        }

        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public string HelloGalaxy(string galaxyName)
        {
            return string.Format("Hello {0}!", galaxyName);
        }

    The one thing you need to remember is to set the WebServiceBinding accordingly:

        [WebServiceBinding(ConformsTo = WsiProfiles.None)]

    The proxy is:

        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloGalaxy",
            RequestNamespace="http://tempuri.org/",
            ResponseNamespace="http://tempuri.org/",
            Use=System.Web.Services.Description.SoapBindingUse.Literal,
            ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        public string HelloGalaxy()

        [System.Web.Services.WebMethodAttribute(MessageName="HelloGalaxy1")]
        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloAnyGalaxy",
            RequestElementName="HelloAnyGalaxy",
            RequestNamespace="http://tempuri.org/",
            ResponseElementName="HelloAnyGalaxyResponse",
            ResponseNamespace="http://tempuri.org/",
            Use=System.Web.Services.Description.SoapBindingUse.Literal,
            ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        [return: System.Xml.Serialization.XmlElementAttribute("HelloAnyGalaxyResult")]
        public string HelloGalaxy(string galaxyName)

    You see the calling method name is the same in the proxy; however, the SOAP message that gets generated is different.

    Using interchangeable data types: See details on this here.

    Type visibility: In a CLR-based application, if you mark a field as private, well, we all know, it's 'private'. Coming to the WSDL side of things, in a Web Service, private fields and web methods will not get generated in the proxy. In WCF, however, all your operation contracts will be public as they get implemented from an interface. Even in case your ServiceContract interface is declared internal/private, you will see it as a public interface in the proxy. This is because type visibility is a CLR concept and has no bearing on WCF. Also, if a private field has the [DataMember] attribute in a data contract, it will get emitted in the proxy class as a public property for the very same reason.

        [DataContract]
        public struct Person
        {
            [DataMember]
            private int _x;

            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }

            [DataMember]
            public string Header { get; set; }
        }

    See, the '_x' field is a private member with the [DataMember] attribute, but the proxy class shows it as below:

        [System.Runtime.Serialization.DataMemberAttribute()]
        public int _x {
            get {
                return this._xField;
            }
            set {
                if ((this._xField.Equals(value) != true)) {
                    this._xField = value;
                    this.RaisePropertyChanged("_x");
                }
            }
        }

    Passing derived types to web methods / operation contracts: Once again, in a CLR application, I can have a derived class be passed as a parameter where a base class is expected. I have the following set up for my WCF service:

        [DataContract]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name="FirstName")]
            public string FName { get; set; }

            [DataMember]
            public string Header { get; set; }
        }

        [DataContract]
        public class Manager : Employee
        {
            [DataMember]
            private int _x;
        }

        // service contract
        [OperationContract]
        Manager SaveManager(Employee employee);

        // in my calling code
        Manager manager = new Manager { _x = 1, FirstName = "abc" };
        manager = LearnWcfServiceClient.SaveManager(manager);

    The above will throw an exception saying, in short, that a Manager type was found where an Employee type was expected!

    Hierarchy flattening of interfaces in WCF: See details on this here. In the CLR world, you'll see the entire hierarchy as is. That's another difference.

    Using ref parameters: You can pass parameters prefixed with the ref keyword (terms and conditions apply), but the operation contract should not be one-way; that gives an error when you do an update service reference. This one kind of stumped me. The main issue is this: how would we know that the changes made to a 'ref' input parameter are returned back from the service and updated in the local variable? It turns out both Web Services and WCF make this tracking happen by passing the input parameter in the response SOAP. This way, when the deserializer does its magic, it maps all the elements of the response XML, thereby updating our local variable. Here's what I'm talking about:

        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public string HelloGalaxy(ref string galaxyName)
        {
            string output = string.Format("Hello {0}", galaxyName);
            if (galaxyName == "Andromeda")
            {
                galaxyName = string.Format("{0} (2.5 million light-years away)", galaxyName);
            }
            return output;
        }

    This is how the request and response look in soapUI. As I said above, the behavior is quite similar for WCF as well. But the catch comes when you have one-way web methods / operation contracts. If you have an operation contract whose return type is void, is marked one-way, and has ref parameters, then you'll get an error message when you try to reference such a service.

        [OperationContract(Name = "Sum", IsOneWay = true)]
        void Sum(ref double arg1, ref double arg2);

        public void Sum(ref double arg1, ref double arg2)
        {
            arg1 += arg2;
        }

    This is what I got when I did an update to my service reference. Makes sense, because a OneWay operation is... one-way; there's no returning from this operation. You can also have a one-way web method:

        [SoapDocumentMethod(OneWay = true)]
        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public void HelloGalaxy(ref string galaxyName)

    This will throw an exception message similar to the one above when you try to update your web service reference. In the CLR space, there's no such concept of a 'one-way' street! Yes, there's void, but you very well can have ref parameters returned through such a method. Just a point here: although the ref/out concept sounds cool, it's generally a code smell. The better approach is to always return an object that is composed of everything you need returned from a method.

    These are some of the differences that we need to bear in mind when dealing with services, since they differ from our daily 'CLR' life.

    Read the article

  • cron not even sending local mail to /var/mail/

    - by Yang
    I'm using a very plain Ubuntu Server 9.04, and cron isn't delivering any mail to my /var/mail/USER (the file hasn't even been created). Here's my full crontab:

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash

    If I add a redirection:

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash >& /tmp/log

    then I see the stdout and stderr in /tmp/log. I'm not (yet) interested in actual remote email delivery, just local delivery to the mail spool file. Why isn't mail working? Thanks in advance for any tips.
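
    A common cause worth ruling out: cron delivers local mail by handing it to /usr/sbin/sendmail, and a minimal Ubuntu Server install may not ship any MTA at all, in which case the mail is silently dropped. A hedged sketch of the check and fix:

        # Does any MTA exist for cron to hand mail to?
        ls -l /usr/sbin/sendmail

        # If not, a local-only MTA is enough for /var/mail delivery
        # (pick "Local only" in the postfix configuration dialog):
        sudo apt-get install postfix

        # Optionally name the recipient explicitly in the crontab:
        #   MAILTO=youruser
        #   15 * * * * $HOME/.cron/sync-bookmarks.bash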

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet / similar tools

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:

    - Amazon EC2 Backup Strategy
    - Amazon EC2 + EBS:: Regular backup plan?
    - Simple Backup Strategy for Amazon EC2 instances / volumes?

    And this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot

    I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do it without a painful learning curve? (Or via other tools like http://ylastic.com.) If yes, how?
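
    For reference, here is a hedged sketch of the rolling-snapshot logic itself, written against the modern aws CLI (which post-dates the tools in the linked posts); the volume ID is a placeholder and the retention window matches the 3-4 weeks mentioned above:

        #!/bin/bash
        # Sketch of a daily rolling EBS snapshot job. Assumes an
        # installed, credentialed aws CLI; the volume ID is hypothetical.
        VOLUME_ID="vol-0123456789abcdef0"
        RETENTION_DAYS=28   # keep ~4 weeks, per the question

        # Take today's snapshot.
        aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
            --description "rolling backup $(date +%F)"

        # Delete this volume's snapshots older than the retention window.
        CUTOFF=$(date -d "-$RETENTION_DAYS days" +%F)
        aws ec2 describe-snapshots --owner-ids self \
            --filters "Name=volume-id,Values=$VOLUME_ID" \
            --query "Snapshots[?StartTime<'$CUTOFF'].SnapshotId" \
            --output text | tr '\t' '\n' | while read -r snap; do
          [ -n "$snap" ] && aws ec2 delete-snapshot --snapshot-id "$snap"
        done

    Puppet (or plain cron) would then only need to distribute and schedule this one script, which keeps the learning curve shallow.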

    Read the article

  • setting up eclim to support php

    - by tipu
    I have the PDT plugin installed with my eclim, using:

        DISPLAY=:1 ./eclipse/eclipse -nosplash -consolelog -debug \
            -application org.eclipse.equinox.p2.director \
            -repository http://download.eclipse.org/releases/helios \
            -installIU org.eclipse.php.feature.group

    I compiled the thing using -D args for php:

        ant -Declipse.home=/home/tipu/downloads/eclipse -Dplugins=php

    But creating a project gives me:

        java.lang.IllegalArgumentException: Unable to find nature for alias 'php'.
        Supported aliases include: javascript=org.eclipse.wst.jsdt.core.jsNature,
        java=org.eclipse.jdt.core.javanature
        while executing command (port: 9091):
            -editor vim -command project_create -f "/home/tipu/phpproj2/" -n php

    Thoughts on how to fix?

    Read the article

  • CentOS 5.7 keeps rebooting after fresh installation

    - by Wagner Maestrelli
    I have just installed CentOS 5.7 x86_64 on a new computer. The installation went on without any issues, but after it finished, the machine started to show some awkward behaviour: it restarts every time it tries to boot. It happens after all the services have been started. The screen just goes black, it shows an error message from the monitor ("Input not supported"), and then it reboots. I took a look at the logs, but I couldn't manage to find anything. Any help?

    Update: Before doing the hardware diagnosis, as pointed out, I decided to run some tests. First, I changed the runlevel to 3, adding the 3 parameter at the end of the kernel command. Then, after logging in in text mode, I checked the xorg.conf file for problems regarding the screen resolution. There was nothing unexpected set. Well, if there were a problem with it, I couldn't start the X server at the command line, right? So I typed startx and Gnome started! So, probably, it's not an issue with the screen resolution, I suppose. Then I selected the "Log Out root..." Gnome menu option and something odd happened: the screen went black, the "Input not supported" monitor error message was displayed, and the system rebooted. Yes, the same problem I was having while trying to boot! After that, I decided to try yet another test: I removed the rhgb quiet parameters from the kernel command to see if some error would show up. Well, to my surprise, the boot went on without problems! The Gnome login screen showed up, I logged in and the session started. But then I selected the "Shut Down..." menu option and guess what? Same problem: black screen, same monitor error, and the system rebooted. Yes, it rebooted; it did not shut down. I repeated both of the tests and the behaviours were the same. I really don't know what's going on. It seems to be an issue regarding the changing of the screen mode or something like that. Any ideas? Could this be a hardware problem? Or does it seem to be something regarding the system configuration?

    Read the article

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application performance. New Relic is reporting that across our system "Memcached" is taking an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by # of requests) I can see that some Memcache gets take up to 30ms; I can't find a single use of Memcache get that returns in less than 10ms.

    More details on the system architecture:

    - Currently we have four application servers, each of which has a memcached member.
    - All four memcached members participate in a memcache cluster.
    - We're running on a cloud hosting provider and all traffic is running across the "internal" network (via "internal" IPs).
    - When I ping from one application server to another, the responses are in ~0.5ms.

    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE      USAGE  HIT %  CONN  TIME   EVICT/s GETS/s SETS/s READ/s  WRITE/s
        cache1:11211  37.1%  62.7%  10    5.3ms  0.0     73     9      3958    84.6K
        cache2:11211  42.4%  60.8%  11    4.4ms  0.0     46     12     3848    62.2K
        cache3:11211  37.5%  66.5%  12    4.2ms  0.0     75     17     6056    170.4K

        AVERAGE:      39.0%  63.3%  11    4.6ms  0.0     64     13     4620    105.7K

        TOTAL:        0.1GB/ 0.4GB  33    13.9ms 0.0     193    38     13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached

          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         6519 nobody  20   0  384m  74m  880 S  1.0 15.3 18:22.97 memcached
            3 root    20   0     0    0    0 S  0.3  0.0  0:38.03 ksoftirqd/0
            1 root    20   0 24332 1552  776 S  0.0  0.3  0:00.56 init
            2 root    20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            4 root    20   0     0    0    0 S  0.0  0.0  0:00.00 kworker/0:0
            5 root    20   0     0    0    0 S  0.0  0.0  0:00.02 kworker/u:0
            6 root    RT   0     0    0    0 S  0.0  0.0  0:00.00 migration/0
            7 root    RT   0     0    0    0 S  0.0  0.0  0:00.62 watchdog/0
            8 root     0 -20     0    0    0 S  0.0  0.0  0:00.00 cpuset
            9 root     0 -20     0    0    0 S  0.0  0.0  0:00.00 khelper
        ...output truncated...
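
    One hedged way to separate network latency from client-side overhead is to time raw gets from an app server with netcat, bypassing the Python client entirely. A sketch ("cache1" is the node name from the memcache-top output, "somekey" is a placeholder, and note each iteration also pays TCP connection setup):

        # 100 raw get round-trips to one cache node. If the per-request
        # time lands near the ~0.5ms ping, the 10ms New Relic sees is
        # client/serialization overhead, not the network or memcached.
        time for i in $(seq 100); do
          printf 'get somekey\r\nquit\r\n' | nc cache1 11211 > /dev/null
        done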

    Read the article

  • EC2 instance store cloning or to ebs via gui management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or use the command line. The case is this. I have an EC2 instance using the instance store (this was the only AMI available for Debian 6 in Ireland). Now, through the AWS GUI I can take a snapshot of the instance volume and even create a volume, but an image made from the snapshot doesn't boot. What is the best way, from the AWS Management Console GUI (not the command line), either to clone an EC2 instance that uses the instance store, or to launch a new EBS-backed instance (an identical clone) from the snapshot of the instance store? Before turning this down, consider that there is no similar question on how to do it via the AWS Management Console. Hint: "can't be done" is not an appropriate answer, as you can create a snapshot of the instance-store-backed instance and/or a volume, and create an AMI from that snapshot.

    Read the article

  • Ubuntu not showing disk

    - by ojek
    I have a laptop which had a broken Windows 7 installed on it. I created an Ubuntu live USB and tried installing Ubuntu over that Win7. After a few minutes, I got an error message, so I needed to restart the computer. Now the laptop says that there is no bootable device — a reasonable message, given that there was an error during the Linux installation. But: the BIOS can see my hard drive, yet when I start Ubuntu in live mode and try either sudo fdisk -l or gparted, neither shows any hard disk drives. I am 90% sure that the HDD is broken, but it is weird that the BIOS can see it and Ubuntu doesn't. How can I be 100% sure about that HDD? Is there any additional way of detecting my HDD from Ubuntu?
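
    A hedged sketch of how to get closer to certainty from the live session (the device name /dev/sda is an assumption; if the kernel never attached the disk at all, dmesg is where that will show):

        # What did the kernel say when it probed the drive?
        dmesg | grep -iE 'ata|sd[a-z]|error'

        # Ask the drive itself for its SMART health, if it answers at all
        # (smartmontools is installable inside the live session):
        sudo apt-get install smartmontools
        sudo smartctl -i /dev/sda   # identify: does the drive respond?
        sudo smartctl -H /dev/sda   # overall health self-assessment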

    Read the article

  • Automated git push attempt does not work - authentication issue

    - by at least three characters
    I'm trying to automate a very periodic git add/commit/push cycle using a shell script and cron under OS X 10.8.5. The script is as basic as one would expect it to be:

        cd /my/directory
        git add .
        git commit -m "a commit message with the date"
        git push -u origin master

    I've tried running it both as root and as a non-root user. When I do this manually, I get a dialog box from OS X requesting that I authenticate the operation. Running the script (either using cron or just using sh) ends up sending a message (via mail) to whichever user's cron executed the script, saying that it was unable to write a file in the .git directory because of a permissions issue (which is most likely why manual execution requires authentication). Is there any way to circumvent this issue, or give the script permission to perform this operation without having me intervene each time?
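
    If the remote is reached over SSH, one common approach is sketched below (paths and the key name are hypothetical); it sidesteps interactive prompts by giving the job its own passphrase-less key, and it assumes the repository files are owned by the same user cron runs the script as:

        # Dedicated key with no passphrase, so nothing can prompt:
        ssh-keygen -t rsa -f "$HOME/.ssh/autopush_key" -N ""
        # Register autopush_key.pub with the git host as a deploy key.

        # Wrapper so git's ssh uses that key (GIT_SSH works on older git):
        mkdir -p "$HOME/bin"
        printf '#!/bin/sh\nexec ssh -i "$HOME/.ssh/autopush_key" "$@"\n' \
            > "$HOME/bin/git-ssh"
        chmod +x "$HOME/bin/git-ssh"

        # In the cron script:
        GIT_SSH="$HOME/bin/git-ssh" git push -u origin master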

    Read the article

  • VMware ESXi - vSphere - Can't exit VM console access

    - by caleban
    - I'm running ESXi 4.1 on a Dell T110 server.
    - I connect to ESXi using vSphere.
    - vSphere is running inside a Windows 7 VM.
    - The Windows 7 VM is running in VMware Fusion on my Mac OS X system.

    When I'm in vSphere, I've selected a VM, and I click the console tab, on some systems the VM console won't release me when I press the Control + Command keys. pfSense (FreeBSD) and Ubuntu Server behave like this: I can't exit their console screen, and I have to shut down these VMs to be released from their VM console access. Windows, Ubuntu Desktop, etc. all behave like I'd expect: when I press the Control + Command keys, I'm released from the VM console and I'm able to navigate in vSphere. Does anyone know what might be causing this or a way around this? Thanks in advance.

    Read the article

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various servers, using hostgroups to define 'machine roles', against which I run services to check the machines by role. However, I'd like to use conditional operators that would enable me to run the service check against an intersection of two host groups, rather than their union (i.e. using &&, ||, or () operators). For example, imagine I have the following servers:

    - www-eu: Linux WWW (Apache) server, in the EU
    - www-us: Windows WWW (IIS) server, in the US (West coast)
    - ftp-eu: Linux FTP server, in the EU
    - ftp-us: Windows FTP server, in the US

    I would want to create the following host groups:

    - US-Servers: www-us, ftp-us
    - EU-Servers: www-eu, ftp-eu
    - WWW-Servers: www-us, www-eu
    - FTP-Servers: ftp-us, ftp-eu

    Now say I'm interested in checking the HTTP response time for my web servers. Let's say this particular Nagios service is running from the US (West Coast), and that I have a command called check_http_response_time. This command checks the responsiveness of the HTTP server, and I can provide an argument that defines the max response time before raising critical. My command might look like:

        check_http_response_time $HOSTNAME$ 50

    Now traditionally, I can run my checks by specifying a list of hosts or hostgroups:

        define service{
            use                     local-service
            hostgroup_name          WWW-Servers   ; Servers = www-us, www-eu
            servicegroups           WWW Checks
            service_description     Check HTTP Response Time
            check_command           check_http_response_time!50
            }

    However, with the above service definition, given my Nagios service is in US West, I could reasonably expect my EU server to return critical. Really, I want different thresholds for each region (50 for US West, 200 for the EU). I would have to permutate my service for each host and set its custom threshold, or alternatively permutate out my service groups by role & region (i.e. WWW-Servers-EU) and run my specific thresholds against those. Though the latter is better, both are much messier than I'd like.

    What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like:

        define service{
            use                     local-service
            hostgroup_name          WWW-Servers && US-Servers
            servicegroups           WWW Checks
            service_description     Check HTTP Response Time
            check_command           check_http_response_time!50
            }

    It would then run the check only against servers that are in both WWW-Servers and US-Servers: in my example, just www-us. The benefits of such a feature would be significant for Nagios services configured at large scale. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this given the most recent Nagios version? Any tips/suggestions are most appreciated! Dave
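
    For what it's worth, one workaround that avoids needing the intersection in this particular example is Nagios 3's custom object variables: carry the per-region threshold on each host and let a single service definition read it back via the $_HOSTvarname$ macro. A sketch (the variable name _HTTP_RESPONSE_TIME is made up for illustration):

        define host{
            host_name               www-us
            _HTTP_RESPONSE_TIME     50      ; custom variable (US West)
            }

        define host{
            host_name               www-eu
            _HTTP_RESPONSE_TIME     200     ; higher allowance for the EU
            }

        define service{
            use                     local-service
            hostgroup_name          WWW-Servers
            service_description     Check HTTP Response Time
            check_command           check_http_response_time!$_HOSTHTTP_RESPONSE_TIME$
            }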

    Read the article

  • Remote connection IP to use

    - by petwho
    I have two laptops that both run Ubuntu, and I installed an SSH server and SSH client on them. One is usually on my desk at home and one I usually bring to my company. When I'm at home I can easily ssh from one to the other by typing this command (to log in to the other laptop, whose IP address is 192.168.0.105):

        ssh -p 22 user@192.168.0.105

    However, when I'm at my company, I try to type the same command and of course it doesn't work. I understand that when at home I'm on a LAN, and that my laptops are actually reachable from outside via my ISP's address, which differs from 192.168.0.107 (assume it's 203.113.131.1). So could you tell me what IP ssh should use for my laptop (at work) to connect to my computer at home? Thank you.
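
    A sketch of the usual arrangement, assuming a typical home NAT setup (the addresses and port below are examples, not the asker's real ones): from work you target the router's public address, and the router forwards a port to the laptop's LAN address.

        # 1. On the home laptop, learn the router's current public address:
        curl ifconfig.me            # prints e.g. 203.0.113.45

        # 2. On the home router, forward external port 2222
        #    to 192.168.0.105:22 (the laptop).

        # 3. From work:
        ssh -p 2222 user@203.0.113.45

    Since a home ISP address usually changes over time, a dynamic DNS name is the common way to avoid repeating step 1.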

    Read the article

  • Getting GTalk and Alt-Tab to play nice

    - by Steve Armstrong
    I'm running GTalk on Windows 7, and it refuses to work properly with Alt-Tab. Let's say I've got 2 message windows open (msg1 and msg2) and the contact list, as well as Firefox. Alt-tab from Firefox to msg1 works, but now the contact list is the most recent thing in the alt-tab list (not Firefox as expected). Then there's the problem that I can't use alt-tab to select a specific message window, and when switching back to msg2 (showing msg2 in the alt-tab window) it might switch back and have focus on msg1. I found this thread complaining about the same problem, but it's over a year old, and I'm hoping some progress has been made.

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and installed the file system as described in the documentation provided on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to boot (/boot/), and it starts booting from the existing SD card. After that, while booting, some errors occurred for some Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth by giving the following command:

        sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec

    Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard are not working. But the mouse and keyboard are enumerating: the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04, and the Linux kernel version is 3.1. What to do? Please help. Thanks in advance.
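
    Since the devices enumerate but produce no input, one hedged line of diagnosis is to check whether the HID/evdev layers are present in this kernel and whether input event nodes appear. A sketch (/proc/config.gz only exists if the kernel was built with it enabled):

        zcat /proc/config.gz | grep -E 'CONFIG_USB_HID|CONFIG_INPUT_EVDEV'
        lsmod | grep -iE 'hid|evdev'    # are the modules loaded?
        sudo modprobe -a usbhid evdev   # try loading them by hand
        ls /dev/input/                  # event nodes should show up here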

    Read the article

  • Do any Windows IM clients support Adium styles?

    - by daxelrod
    I know I can't actually get Adium on Windows. Are there any Windows IM clients that at least support Adium styles, specifically Contact List Styles and Message Styles? Pidgin is heart-breakingly close, but as far as I can tell, it's not there yet:

    - Pidgin-WebKit would be perfect, except it doesn't seem to compile on Windows.
    - adium2pidgin-themes converts Adium Xtras into Pidgin themes, but only supports sound, status, and emoticon theme types:

        -t TYPE, --type=TYPE    type of theme, may be: auto, sound, status or emoticon, default: auto

    - The Pidgin project is considering merging Pidgin-WebKit into Pidgin itself, but that sounds like a long way off: "Most notably, we've been talking about merging the webkit integration branch into what will become 3.0.0. Eventually, this would allow the support of Adium's message styles, although it may not happen right away."

    So, are there any Windows IM clients that support Adium styles today?

    Read the article

  • IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in

    - by RSolberg
    I just launched my first IIS FTP site following many of the tutorials from IIS.NET. I'm using IIS users and permissions rather than anonymous and/or basic. This is what I'm seeing while trying to establish the connection:

        Status:   Resolving address of ftp.mydomain.com
        Status:   Connecting to ###.###.##.###:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFTPUser
        Response: 331 Password required for MyFTPUser.
        Command:  PASS ********************
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server

    Read the article

  • In SharePoint, why can I "multiple document upload" a 47,297 byte file, but not a 47,298 byte file?

    - by Jim
    It's strange. I can upload a document named 47k.txt that is 47,297 bytes using the "Multiple Document Upload" feature. However, if I add a single character to the end of the text file, the upload fails. Also, if I rename the file to 47kx.txt and try to upload it, it fails. This is the error I get in the SharePoint logs:

        Category: General
        Event ID: 8jzm
        Level: High
        Message: #90012: An error was encountered while processing files on the
        server. Try uploading one file at a time by using the single upload page.

    The same error is reported in a message box on the client side. Does anybody know why this would happen?

    Read the article

  • Running IE on OSX with WineBottler: Can't find wine?

    - by AP257
    So I want to run IE7 on OSX 10.6 with WineBottler. I saw it was possible to run IE on Mac with WineBottler, following these instructions. I installed WineBottler and IE7. All was looking good. However, when I tried to open IE7 from the Applications menu, I got an error message: "Can't find Wine. Wine is required to run this program." I then installed wine-devel from macports (which was a bit fiddly as I hit this problem and had to update a lot of dependencies, but it did eventually build). However, even after doing that, I'm still seeing the 'Can't find wine' error message whenever I try to open IE7 or WineBottler. Could anyone advise? Do I need to start wine running somehow?

    Read the article

  • How to add a disclaimer to forwarded messages to outside domains in Exchange 2013?

    - by Vinícius Ferrão
    I would like to implement some kind of filter to add a disclaimer message to emails forwarded to outside domains. Today we have some users who set up filters to forward messages to external mail servers, for example @gmail addresses. This kind of forward should be marked with the disclaimer message — not normal fwd messages. We also have a Postfix mail-filtering gateway; if it's simpler to implement this on the mail filter, that would be a viable option. What would be the best approach to handle this issue? Thanks,
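
    For what it's worth, a sketch of how this might look as an Exchange 2013 transport rule in the Exchange Management Shell. The MessageTypeMatches AutoForward predicate is meant to catch rule-generated forwards rather than ordinary manual fwd messages, but test the exact matching behaviour (and the disclaimer wording, which is a placeholder here) before relying on it:

        New-TransportRule -Name "External auto-forward disclaimer" `
            -MessageTypeMatches AutoForward `
            -SentToScope NotInOrganization `
            -ApplyHtmlDisclaimerLocation Prepend `
            -ApplyHtmlDisclaimerText "<p>This message was automatically forwarded outside the organization.</p>"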

    Read the article

  • Jenkins swarm-plugin jar file, won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy with a single command. However, I am trying to get the slave command (a Java application) to run in the background without stalling the current process, and this doesn't seem to work. I've created an init.d file and added it to update-rc.d, but that doesn't work:

        #!/bin/bash
        /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4

    I've also tried to run it with an ampersand (&) to start the process in the background, but that doesn't work either because, from looking at the source, the jar file actually boots another process that is then started in the foreground. Any ideas on how to make this jar file start without stopping the bootstrap script?
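
    A hedged sketch of the detach step for the init.d file, using start-stop-daemon (the usual Debian/Ubuntu way to background a foreground-only program). Paths are the ones from the question; note the pidfile records the process start-stop-daemon forked, which may differ from the worker if the jar re-executes itself:

        #!/bin/sh
        start-stop-daemon --start --background --make-pidfile \
            --pidfile /var/run/swarm-client.pid \
            --exec /usr/bin/java -- \
            -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
            -executors 4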

    Read the article
