Search Results

Search found 19676 results on 788 pages for 'hardware interface'.

Page 210/788 | < Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >

  • Can't get Wireless RT2x00usb driver to work, and can't blacklist it

    - by TheLQ
    After a two-year hiatus from Linux, I'm trying it out again. And then I run into driver issues... I have an old Linksys WUSB54G v4 Wireless USB Adapter. In previous versions I had to use a combination of ndiswrapper and Wicd to have any hope of getting it working. In 10.10 there are apparently built-in drivers for it. Unfortunately they don't work: it fails to connect to my WPA network and fails to connect to my open, unencrypted network. Wicd fails at "Obtaining IP address", or, when using static IPs, fails at verifying connectivity to the network. Getting fed up, I tried the ndiswrapper approach. Installed and configured, but still not working, even when blacklisting the rt2570 module. So for some debugging I added some lines to my /etc/modprobe.d/blacklist.conf file:

        blacklist rt2570
        blacklist prism54usb
        blacklist rt2x00lib
        blacklist rt2x00usb

    Restart and find this:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsmod | grep rt2
        rt2500usb              18049  0
        rt2x00usb               9779  1 rt2500usb
        rt2x00lib              27275  2 rt2500usb,rt2x00usb
        led_class               2633  1 rt2x00lib
        mac80211              231541  2 rt2x00usb,rt2x00lib
        cfg80211              144470  2 rt2x00lib,mac80211

    The blacklist seems to be ignored... Tried this:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00usb
        ERROR: Removing 'rt2x00usb': Resource temporarily unavailable
        lordquackstar@quackbeast:/etc/modprobe.d$ sudo rmmod -f rt2x00lib
        ERROR: Removing 'rt2x00lib': Resource temporarily unavailable

    and still couldn't connect. Restarted and was back to the same modules loading. Maybe there's something in the log:

        lordquackstar@quackbeast:/etc/modprobe.d$ tail -n100000 /var/log/syslog | grep rt2
        Dec 13 19:01:15 quackbeast kernel: [   23.698056] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:01:15 quackbeast kernel: [   23.698140] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:01:15 quackbeast kernel: [   23.701680] usbcore: registered new interface driver rt2500usb
        Dec 13 19:01:15 quackbeast NetworkManager[855]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)
        Dec 13 19:17:47 quackbeast kernel: [   23.521759] Registered led device: rt2500usb-phy0::radio
        Dec 13 19:17:47 quackbeast kernel: [   23.521824] Registered led device: rt2500usb-phy0::quality
        Dec 13 19:17:47 quackbeast kernel: [   23.524740] usbcore: registered new interface driver rt2500usb
        Dec 13 19:17:47 quackbeast NetworkManager[798]: <info> (wlan0): new 802.11 WiFi device (driver: 'rt2500usb' ifindex: 4)

    The driver seems to be autoloading. So this means that even if I pull the adapter out, remove the module, and get it working, it still won't work when it's plugged in all the time. More info:

        lordquackstar@quackbeast:/etc/modprobe.d$ sudo lshw -C Network
        *SNIP*
        *-network
             description: Wireless interface
             physical id: 1
             bus info: usb@1:2
             logical name: wlan0
             serial: 00:12:17:9b:f3:1e
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=rt2500usb driverversion=2.6.35-24-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

    USB:

        lordquackstar@quackbeast:/etc/modprobe.d$ lsusb | grep -i rt
        Bus 001 Device 003: ID 13b1:000d Linksys WUSB54G v4 802.11g Adapter [Ralink RT2500USB]

    Any suggestions on how to either fix the rt2x00usb driver or permanently block it from loading? Note that I already have ndiswrapper installed.
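
    One common approach (a sketch, not from the original thread; the file name is only an example): plain "blacklist" lines do not stop a module that is loaded from the initramfs or pulled in as a dependency, but "install <module> /bin/false" does, provided the initramfs is rebuilt afterwards.

        # /etc/modprobe.d/blacklist-rt2x00.conf  (example file name)
        # "install ... /bin/false" refuses the load even when another module
        # requests it, which a plain "blacklist" line does not.
        install rt2500usb /bin/false
        install rt2x00usb /bin/false
        install rt2x00lib /bin/false

        # Rebuild the initramfs so the early-boot module list picks up the change,
        # then reboot and verify nothing Ralink-related is loaded:
        sudo update-initramfs -u
        lsmod | grep rt2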

    Read the article

  • Can't access Windows 7 OS in Ubuntu

    - by Myers
    When I try to access my Windows 7 OS I get the following error message, and I have no idea what to make of it:

        Error mounting /dev/sda2 at /media/name/Windows7_OS:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda2" "/media/name/Windows7_OS"' exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then
        first activate it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    What should I do?
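
    The error text itself points at the usual fix. A sketch of the sequence it asks for, assuming /dev/sda2 is the Windows system drive (C:); this is not from the original thread:

        # From inside Windows, in an administrator command prompt:
        chkdsk C: /f
        # ...then reboot into Windows twice, exactly as the message says,
        # so NTFS is marked clean before Ubuntu tries to mount it again.

    If Windows itself no longer boots, the disk may have a genuine hardware fault; in that case imaging the partition before further repair attempts is the safer move.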

    Read the article

  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
    If you didn't notice during JavaOne, the Java Micro Edition took a major step forward in its consolidation with Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus on moving Java ME back in line with its big brother Java SE. We see evidence of this in the JCP itself, which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. But just before that occurred, JSR 360 was proposed and approved for development on October 29. So let's take a look at what changes are now being proposed.

    In a way JSR 360 is returning to the original roots of Java ME when it was first introduced. It was indeed a subset of the JDK 4 language, but as Java progressed many of the language changes were not implemented in Java ME. Back then the trade-off was still functionality versus footprint, but the major market was feature phones. Today the market has changed and CLDC, while it will still target feature phones, will have its primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring and other M2M devices. The major changes will come in three areas: language feature changes, library changes, and consolidating the Generic Connection Framework.

    Three Java SE versions have been released since Java ME was first developed, so the language feature changes can be divided into changes that came in JDK 5 and those in JDK 7, which mostly consist of the Project Coin changes. There were no language changes in JDK 6. The changes from JDK 5 are:

    Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example code below, if the interval isn't between 0 and 1,000, an error of "Invalid value?" is thrown.

        private void setInterval(int interval) {
            assert interval > 0 && interval <= 1000 : "Invalid value?";
        }

    Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code: stronger type checks at compile time, elimination of casts, and the ability to implement generic algorithms.

    Enhanced for Loop - The enhanced for loop allows you to iterate through a collection without having to create an Iterator or calculate beginning and end conditions for a counter variable. The enhanced for loop is the easiest of the new features to immediately incorporate in your code. In this tip you will see how the enhanced for loop replaces more traditional ways of sequentially accessing elements in a collection.

        void processList(Vector<String> list) {
            for (String item : list) { ...

    Autoboxing/Unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int, and wrapper types, such as Integer.

        Hashtable<Integer, String> data = new Hashtable<>();
        void add(int id, String value) { data.put(id, value); }

    Enumeration - Prior to JDK 5, enumerations were not typesafe, had no namespace, were brittle because they were compile-time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type). In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type.

        enum Season {WINTER, SPRING, SUMMER, FALL};
        private Season season;
        void setSeason(Season newSeason) { season = newSeason; }

    Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position.

        void warning(String format, String... parameters) {
            ...
            for (String p : parameters) { ...process(p);... }
            ...
        }

    Static Imports - The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration. Where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification.

        import static data.Constants.RATIO;
        ...
        double r = Math.cos(RATIO * theta);

    Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. There are a number of uses for annotations, including information for the compiler, compile-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements.

        @Deprecated
        public void clear();

    The language changes from JDK 7 are a little more familiar, as they are mostly the changes from Project Coin:

    String in switch - Hey, it only took us 18 years, but the String class can now be used in the expression of a switch statement. Fortunately for us it won't take that long for Java ME to adopt it.

        switch (arg) {
            case "-data": ...
            case "-out": ...

    Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system, and any number of underscore characters (_) can appear anywhere between digits in a numeric literal.

        byte flags = 0b01001111;
        long mask = 0xfff0_ff08_4fff_0fffL;

    Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration.

        catch (IOException | InterruptedException ex) {
            logger.log(ex);
            throw ex;
        }

    Type Inference for Generic Instance Creation - Otherwise known as the diamond operator: the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context.

        map = new Hashtable<>();

    Try-with-resources statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement.

        try (DataInputStream is = new DataInputStream(...)) {
            return is.readDouble();
        }

    Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced a compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.

    On the library side there are new features that will be added to satisfy the language requirements above and some to improve the currently available set of APIs. The library changes include:

    Collections update - New Collection, List, Set, Map, Iterable and Iterator interfaces as well as implementations including Hashtable and Vector. Most of the work is to support generics.

    String - New StringBuilder and CharSequence as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer. Since StringBuilder, unlike StringBuffer, is not synchronized there is a performance increase, which has necessitated a change in the way the String constructor works.

    Comparable interface - The Comparable interface works with Collections, making it easier to reuse.

    Try with resources - Closeable and AutoCloseable.

    Annotations - While support for annotations is provided, it will only be compile-time support: SuppressWarnings, Deprecated, Override.

    NIO - There is a subset of NIO Buffer that has been in use in one of the graphics packages and needs to be pulled in, as well as support for a subset of NIO file I/O.

    Platform extensibility via Service Providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and supply the behavior of the implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE.

        import com.XYZ.ServiceA;
        ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class);

        Resources:
        META-INF/services/com.XYZ.ServiceA:
            ServiceAProvider1
            ServiceAProvider2
            ServiceAProvider3
        META-INF/services/ServiceB:
            ServiceBProvider1
            ServiceBProvider2

    The Generic Connection Framework (GCF) was previously specified in a number of different JSRs including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, and provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations.

    Because of size limitations as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.
    JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR, see the JSR's java.net project for details of the email lists and discussion groups.

    Read the article

  • How do I get wireless to work on a Dell Mini 1018 with Realtek RTL 8188CE Network Adapter?

    - by Laurence Nagel
    My Dell Mini 1018 used to work fine with Ubuntu 12.04, with no problems connecting with the wireless card. Then I made the big mistake of attempting to disable WiFi by right-clicking on the wireless applet and un-checking "Enable Wireless." Since then, I cannot get wireless working again. The wireless applet reports that "wireless is disabled by hardware switch", but there is no hardware switch on this computer. Issuing rfkill list yields the following:

        0: dell-wifi: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    I'm in way over my head, and my computer is now useless. Help!!!
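
    A sketch of the usual checks (not from the original thread): a hard block with no physical switch normally comes from the BIOS/EC through the vendor ACPI driver (here the dell-wifi entry is provided by the dell-laptop module), so it has to be toggled from firmware or the Fn wireless key rather than from userspace.

        # Clear the soft blocks first (hard blocks cannot be cleared this way):
        sudo rfkill unblock all

        # Toggle the radio with the Fn wireless key combination (varies by Dell model),
        # and confirm that wireless is enabled in the BIOS setup screen.

        # Reloading the Dell ACPI module sometimes resets the reported hard block:
        sudo modprobe -r dell-laptop
        sudo modprobe dell-laptop
        rfkill list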

    Read the article

  • TFS 2012 Upgrade and SQL Server - SharePoint - OS Requirements.

    - by Vishal
    Hello folks,

    Recently I was involved in the installation and configuration of a Team Foundation Server 2010 farm for a client. A month after the installation and configuration was done and everything was working as it was supposed to, Microsoft released Team Foundation Server 2012 in mid-August 2012. The company had been using Borland StarTeam as their source control, and once they started using TFS 2010 their developers and project managers were loving it, since TFS is not just a source control tool and is much better than StarTeam. Anyway, long story short, they are now thinking of upgrading to the newest version. Below are some basic hardware and software requirements for TFS 2012:

    Operating System:
        Windows Server 2008 with SP2 (64-bit only)
        Windows Server 2008 R2 with SP1 (64-bit only)
        Windows Server 2012 (64-bit only)

    SQL Server:
        SQL Server 2008 R2 and SQL Server 2012 (SQL Server 2008 is no longer supported).
        SQL Server Requirements for TFS.

    SharePoint Products:
        SharePoint Server 2010 (SharePoint Foundation 2010, Standard, Enterprise)
        MOSS 2007 (Standard, Enterprise)
        Windows SharePoint Services 3.0 (WSS 3.0)
        SharePoint Products Requirements for TFS.

    Project Server:
        Project Server 2010 with SP1.
        Project Server 2007 with SP2.
        Project Server Requirements for TFS.

    More information on TFS Upgrade Requirements can be found here. Hardware Recommendations can be found here.

    Thanks,
    Vishal Mody

    Read the article

  • How to Quickly Check If Your Computer Can Run a PC Game

    - by Chris Hoffman
    PC gaming isn’t quite as simple as console gaming. If you have a laptop with weak graphics hardware or an older PC, it’s important to check whether your computer can support a game before you spend your hard-earned cash. Gamers don’t have to upgrade their hardware as often as they used to. Even a gaming PC built years ago should be able to handle the newest games just fine. Laptops not built for gaming and older PCs are a different matter.

    Read the article

  • Updating scene graph in multithreaded game

    - by user782220
    In a game with a render thread and a game logic thread, the game logic thread needs to update the scene graph used by the render thread. I've read about ideas such as a queue of updates. Can someone describe, to a newbie at scene graphs, what kind of interface the scene graph exports? Presumably it would be rather complicated. So how does a queue of updates get implemented in C++ in a way that can handle the complexity of the scene graph's interface while also being type-safe and efficient? Again, I'm a newbie at scene graphs and C++.

    Read the article

  • How to install Ubuntu over http in virtual manager?

    - by Bond
    Hi, I am in a situation where I cannot use a CD, PXE boot, or Wubi to install; I necessarily need to do an HTTP install of Ubuntu. I am basically trying to create a guest OS in a virtualization setup on Xen on non-VT hardware. On non-VT hardware, virt-manager does not allow installing from a local ISO or PXE; the only option is via a URL over http://. Here is what I did:

    1) Downloaded the Ubuntu 10.04 32-bit ISO.
    2) Kept it in /var/www (apache2 is running).
    3) Renamed it to ubuntu.iso, and when I reached the stage where installation begins I gave the path http://localhost/ubuntu.iso, but I got an error that an installable distribution was not found.
    4) After this I did:

        mkdir /var/www/sk
        mount -t iso9660 /var/www/ubuntu.iso /var/www/sk -o loop

    and this time during the installation I gave the path http://localhost/sk. In a browser I was able to see the contents at http://localhost/sk, the same as you would see on a normal CD. But on beginning the installation I got the same error:

        ValueError: Could not find an installable distribution at 'http://localhost/sk'

    So I just want to confirm whether an HTTP install is done only this way or some other way, because the installation is not proceeding.
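
    A likely explanation and a sketch of a workaround (not from the original thread): a URL install expects a distribution installer tree (the netboot/alternate layout containing a kernel and initrd), not a desktop live ISO, which is why both the raw ISO and the loop-mounted copy were rejected. Paths and flags below are illustrative, using the older virt-install syntax:

        # Point the installer at the lucid (10.04) installer tree on the official mirror:
        #   http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/
        # or mirror that tree locally under Apache and use http://localhost/lucid/ instead.

        sudo virt-install --name ubuntu-guest --ram 1024 --paravirt \
          --location http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-i386/ \
          --file /var/lib/xen/images/ubuntu-guest.img --file-size 8

    The same URL can be entered in virt-manager's "Network Install (HTTP)" step in place of the ISO path.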

    Read the article

  • How to create a KDE start application menu launcher that launches the search and launch desktop?

    - by davorao
    Now, I love plasma-netbook, a.k.a. the "search and launch" interface, but I would like to have it appear on demand rather than sit as the desktop default behind my windows. So what I imagine is creating an application launcher which, on activation, brings up the "search and launch" feature, allowing easy access to your favorites, or having that slick interface to find what you're looking for. I'm aware that KRunner solves a similar use case, but search and launch would be more of a Unity-Dash type of search, whereas KRunner is more useful when you know exactly what you're looking for. To summarize my question, since I'm clueless as to how this would be done: I'm looking for someone who could tell me how hard it would be to do this.

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction

    Recently a client I worked for had 2 major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements on short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn’t seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the ‘test rig’ proved to be less than if it were hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post; in this series of posts I’ll try and cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But Why Azure? I have my own Data Centre…

    If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?
    Administration, Cost, Flexibility and Scalability are the areas you would want to think about when deciding between your own data centre and Azure!

    How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering?

    Microsoft's cloud offering is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS).

    PaaS – Platform as a Service:
    - Fills the needs of those who want to build and run custom applications as services.
    - A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized.
    - On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.

    IaaS – Infrastructure as a Service:
    - Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre.
    - The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration.
    - This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment, you can focus all your time in solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS, more on this later. A nice blog post here on the difference between Saas, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud and why in specific Azure, let’s discuss about the Test Rig. The Load Test Rig – Topology Now the moment of truth, Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a Load Testing rig where the Agents are running on Windows Azure.   I’ll talk you through the above Topology, - User: User kick starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send work loads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud, as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre, this is usually the case with business critical applications, you probably want to manage them your self. Recap and What’s next? So, in the introduction in the series of blog posts on Load Testing in the cloud I highlighted why creating a test rig in the cloud is a good idea, what advantages does Windows Azure offer and the Test Rig topology that I will be using. I would also like to mention that i stumbled upon this [Video] on Azure in a nutshell, great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to test agent machine types that will be used in the test rig. Hope you enjoyed this post, If you have any recommendations on things that I should consider or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora.  See you in Part II.   Share this post : CodeProject

    Read the article

  • blurry lines between web application context layer, service layer and data access layer in spring

    - by thenaglecode
    I originally asked this question on SO, but on advice I have moved the question here... I'll admit I'm a Spring newbie, and you can correct me if I'm wrong, but this one-liner looks kinda fishy in a best-practices sort of way:

        @RepositoryRestResource(collectionResourceRel="people"...)
        public interface PersonRepository extends PagingAndSortingRepository<Person, Long>

    For those who are unaware, this does many things: it is an interface definition that can be registered in an application context as a JPA repository, automagically hooking up all the default CRUD operations within a persistence context (that is externally configured), and it also configures default controller/request-mapping/handler functionality at the namespace "/people" relative to your configured dispatcher servlet mapping. Here's my point: I just crossed 3 conceptual layers with one line of code! This feels against my separation-of-concerns instincts, but I wanted to hear your opinion. And for the sake of being on a question-and-answer site, I would like to know whether there is a better way of separating these different layers - Service, Data, Controllers - whilst keeping configuration as minimal as possible.

    Read the article

  • Intel Centrino Wireless N 1000 doesn't work on a Lenovo Z560

    - by Timetraveler
    I upgraded my Ubuntu 11.04 to 11.10 and my Wifi has stopped working. I have a Lenovo Z560 that has Intel centrino wireless-N 1000. I have searched various threads having similar problems for a solution and have no success. The wlan0 is not even showing up in rfkill. Please help me find a solution. I am giving below the output of various debug commands. Thanks in advance. DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10" ----##uname -a Linux gurucharapathy-laptop 3.0.0-17-generic-pae #30-Ubuntu SMP Thu Mar 8 17:53:35 UTC 2012 i686 i686 i386 GNU/Linux ----##lspci -nnk | grep -iA2 net 05:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [8086:0084] Subsystem: Intel Corporation Centrino Wireless-N 1000 BGN [8086:1315] Kernel modules: iwlagn 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Lenovo Device [17aa:392e] Kernel driver in use: r8169 ----##iwconfig lo no wireless extensions. eth0 no wireless extensions. ----##iwlist scan lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. ----##rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 2: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no ----##lsmod Module Size Used by rfcomm 38408 8 bnep 17923 2 parport_pc 32114 0 ppdev 12849 0 binfmt_misc 17292 1 snd_hda_codec_hdmi 31426 1 snd_hda_codec_conexant 52460 1 uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 28358 2 snd_hda_codec 91859 3 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec joydev 17393 0 snd_pcm 80435 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 i915 509554 9 drm_kms_helper 32889 1 i915 snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28932 2 snd_pcm,snd_seq drm 196290 5 i915,drm_kms_helper snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq mei 36466 0 mac80211 393421 0 snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device ideapad_laptop 13575 0 intel_ips 17753 0 btusb 18160 2 i2c_algo_bit 13199 1 i915 soundcore 12600 1 snd bluetooth 148839 23 rfcomm,bnep,btusb cfg80211 172427 1 mac80211 psmouse 63474 0 serio_raw 12990 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm sparse_keymap 13658 1 ideapad_laptop wmi 18744 0 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 2 libahci 25761 1 ahci r8169 47200 0 ----##nm-tool NetworkManager Tool State: asleep Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: r8169 State: unmanaged Default: no HW Address: 88:AE:1D:DE:5F:9C Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on ----##lshw -C network *-network UNCLAIMED description: Network controller product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:05:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:d6400000-d6401fff *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 02 serial: 88:ae:1d:de:5f:9c size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=192.168.0.100 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff
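
    A sketch of the usual next steps (not from the original thread): lshw shows the wireless card as UNCLAIMED and lspci lists iwlagn as the matching module but no driver in use, which often means the module failed to load or its firmware file is missing after the upgrade.

        # See whether the module loads and whether it complains about firmware:
        sudo modprobe iwlagn
        dmesg | grep -i iwl

        # Make sure the Intel firmware package is present (it ships the
        # iwlwifi-1000-*.ucode files this card needs):
        sudo apt-get install --reinstall linux-firmware

        # Reload the module and check that wlan0 appears:
        sudo modprobe -r iwlagn && sudo modprobe iwlagn
        rfkill list
        iwconfig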

    Read the article

  • Broadcom BCM4313 wireless slow and high-latency

    - by Florin Andrei
    Ubuntu 12.10 64 bit on a Dell Latitude E6330 laptop. Wireless is pretty slow. It gets connected quick enough, but then it acts like a dialup connection. My ssh sessions over WiFi are slow and laggy. Even browsing is slow, the pages are loading like it's 1998. This does not depend on the access point, it's the same both at home and at work. Other systems work fine on these access points. I had an older Dell laptop before, different WiFi hardware, and it was much faster over the same wireless access points. Is this a known issue with this hardware? If so, any solutions?
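
    A sketch of the common workaround (not from the original thread): the BCM4313 is driven by the open brcmsmac driver by default, which is often slow or laggy; most reports end up switching to Broadcom's proprietary STA driver instead.

        # Confirm which driver is currently bound to the card:
        lspci -nnk | grep -iA3 network

        # Switch to Broadcom's STA driver (package name as in the Ubuntu 12.10 repos);
        # installing it blacklists the in-kernel brcmsmac/b43 drivers. Reboot afterwards.
        sudo apt-get install bcmwl-kernel-source

        # If you prefer to stay on brcmsmac, disabling power saving on the interface
        # sometimes helps with latency (interface name assumed to be wlan0):
        sudo iwconfig wlan0 power off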

    Read the article

  • Why should I use List<T> over IEnumerable<T>?

    - by Rowan Freeman
    In my ASP.net MVC4 web application I use IEnumerables, trying to follow the mantra to program to the interface, not the implementation. Return IEnumerable(Of Student) vs Return New List(Of Student) People are telling me to use List and not IEnumerable, because lists force the query to be executed and IEumerable does not. Is this really best practice? Is there any alternative? I feel strange using concrete objects where an interface could be used. Is my strange feeling justified?

    Read the article

  • Speaking at AMD Fusion conference

    - by Daniel Moth
    Next Wednesday at 2pm I will be presenting a session at the AMD Fusion developer summit in Bellevue, Washington State. For more on this conference please visit the official website. If you filter the catalog by 'Speaker Last Name' to "Moth", you'll find my talk. For your convenience, below is the title and abstract Blazing-fast code using GPUs and more, with Microsoft Visual C++ To get full performance out of mainstream hardware, high-performance code needs to harness, not only multi-core CPUs, but also GPUs (whether discrete cards or integrated in the processor) and other compute accelerators to achieve orders-of-magnitude speed-up for data parallel algorithms. How can you as a C++ developer fully utilize all that heterogeneous hardware from your Visual Studio environment? How can your code benefit from this tremendous performance boost without sacrificing your developer productivity or the portability of your solution? The answers will be presented in this session that introduces a new technology from Microsoft. Hope to see many of you there! Comments about this post welcome at the original blog.

    Read the article

  • Sound plays from laptop speakers only even when headphones are connected

    - by Ankush N Nayak
    I'm using a Dell Vostro 1014 laptop with Ubuntu 11.10. Ever since Ubuntu 9.04, sound has come from the laptop speakers only, even when I plug in earphones or external speakers. Also, if an external microphone is plugged in, it does not work. Dell even replaced my motherboard to check for hardware issues, but the problem persists, so it is definitely not a hardware issue. The sound plays perfectly through the headphones in the pre-boot diagnostics. So I figure this must be a problem with Ubuntu not recognizing my sound card. Please suggest a solution for this problem.
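
    A hedged sketch of one common workaround (not from the original thread): when jack sensing fails on an HDA codec, forcing a model quirk for snd-hda-intel sometimes helps. The model value below is only an example; the right string depends on the codec and is listed in the kernel's HD-Audio-Models documentation.

        # Identify the codec first:
        cat /proc/asound/card0/codec* | grep -i codec

        # Then try a jack-sense quirk in /etc/modprobe.d/alsa-base.conf
        # ("dell-vostro" is an example value, not a guaranteed match):
        options snd-hda-intel model=dell-vostro

        # Reload ALSA (or reboot) and retest the headphone jack:
        sudo alsa force-reload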

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, dhcp3-server, eth0, eth1, eth2. (Edit: removed br0 & br1.) eth0 is the external connection; eth1 & eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students/teachers respectively. What I would like is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is bridging for internet. I have set up a script that I found here: Router, with the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe
        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "  External Interface:  $EXTIF"
        echo "  Internal Interface:  $INTIF"
        echo "  Internal Interface:  $INTIF2"
        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "  External IP: $EXTIP"
        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP on the eth1 & eth2 devices, and my computer can see them, and they can see it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0 the pings will pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed for the operators on site to know how to use it). If you could explain it as standard CLI commands, or as edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.
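
    A sketch of the usual checklist for "pings pass but DNS doesn't" (not from the original thread; it assumes eth0 is the WAN side, as in the config above):

        # Make sure kernel forwarding is on, and persist it:
        echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
        # in /etc/sysctl.conf:  net.ipv4.ip_forward=1

        # NAT everything leaving eth0 and allow forwarded traffic both ways:
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        sudo iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
        sudo iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Hand the clients a resolver they can actually reach, e.g. in dhcpd.conf:
        #   option domain-name-servers 8.8.8.8, 8.8.4.4;

    Having both MASQUERADE and a SNAT rule on the same interface (as in the posted rules) is redundant; one of the two is enough.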

    Read the article

  • unable to mount internal disk mount exited with exit code 13

    - by Masri
    My Ubuntu runs into an error when I try to mount one of my internal disks; it gives this error message:

        Error mounting: mount exited with exit code 13:
        $MFTMirr does not match $MFT (record 3).
        Failed to mount '/dev/sda7': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then
        first activate it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    Please advise how to solve the above error. Many thanks to you in advance.
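
    A sketch of the usual fix for this specific message (not from the original thread): a $MFTMirr/$MFT mismatch is the one case ntfsfix can generally repair directly, by resyncing the MFT mirror and clearing the dirty flag; if it cannot, fall back to chkdsk /f from Windows as the error text suggests.

        # Unmount if it is partially mounted, then resync $MFTMirr with $MFT:
        sudo umount /dev/sda7        # ignore "not mounted" errors
        sudo ntfsfix /dev/sda7       # part of the ntfs-3g / ntfsprogs tools

        # Then mount it again:
        sudo mount -t ntfs-3g /dev/sda7 /mnt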

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg - you can't be negative about Oracle" for this article. However, as usual, I want to explain the logic behind my reasoning. Yes - I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11.

    I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490, introduced in 2004 for example, was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not - I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, when it closed the darned single-socket gap. It may be different at your site, however that's what I remember about that time when talking with customers.

    First of all: Just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005. Plus 8 years - 2013 - at minimum. Then you can still opt for 3 years of "Extended Support" - 2016 - at minimum. In 2016 your systems purchased in 2009 are 7 years old, even for systems purchased at the very end of the lifetime of that system generation. Those are the rules as written in the linked document. I said minimum. The actual dates are even further in the future: Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018. Sustaining Support - indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand when some people write that Oracle is less protective about hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support.

    I would like to write about a different perspective as well. I have to be a little cautious here, because this is going into the roadmap area, so I will mention only the public sources: John Fowler told us last year that we have to expect at least 3x the single-thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math and not to disclose anything about the performance; all existing SPARC cores are considered equal). So given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, 2 full-blown M3000 on a single T4 SPARC processor. The Fowler roadmap prezo talked about 4-socket systems with T4.
    So 32 V215, 16 to 8 V245, 8 full-blown V445, 8 full-blown V490, 8 full-blown M3000 in a system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to yield other advantages like reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and yes - a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much less than most people think now. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume telling just a word more could lead to much more time to blog.

    Read the article

  • Can't connect to WIFI after sleep Ubuntu 14.04

    - by user3380404
    After an upgrade over a previous version to Ubuntu 14.04 I have problems with connecting to WiFi. After suspending it won't reconnect to any existing network unless I restart the laptop. sudo lshw -C network: *-network description: Ethernet interface product: 82567LM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 03 serial: 00:27:13:b4:d3:58 capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=2.3.2-k firmware=1.8-3 latency=0 link=no multicast=yes port=twisted pair resources: irq:45 memory:fc200000-fc21ffff memory:fc225000-fc225fff ioport:1840(size=32) *-network description: Wireless interface product: PRO/Wireless 5100 AGN [Shiloh] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 00 serial: 00:26:c6:c6:6b:c8 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.13.0-29-generic firmware=8.83.5.1 build 33692 ip=10.0.1.160 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn resources: irq:48 memory:f4200000-f4201fff
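
    A sketch of a common workaround (not from the original thread; file name is only an example): reload the iwlwifi module around suspend/resume with a pm-utils hook, which 14.04 still executes from /etc/pm/sleep.d.

        # /etc/pm/sleep.d/50_restart_wifi   (example name; mark it executable with chmod +x)
        #!/bin/sh
        case "$1" in
            suspend|hibernate)
                # unload the Intel wireless driver before sleeping
                modprobe -r iwlwifi
                ;;
            resume|thaw)
                # reload it on wake-up and kick NetworkManager
                modprobe iwlwifi
                service network-manager restart
                ;;
        esac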

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out the fact that it would be impossible to test a multiplication function f(x,y) = x*y for any large values of x and y across the entire ranges of x and y. My question concerns his misc. remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens or if there are any statistics on this?

    Read the article

  • Proper way to measure the scalability of web Application

    - by Jorge
    Let's say I have a web application where I'm going to have 300 users, and each one has to see data in real time. Imagine that each client makes an AJAX call to the server to see, in real time, what happens as the data changes; these calls are made every 300 ms per user. I know that I can run a simulation to see if my server's hardware supports this example. But what happens if the number of users starts to grow? Is there a way I can measure the hardware needed to handle this growing behavior - a piece of software, a formula, an algorithm - or maybe a recommendation on whether I need to implement a distributed application with multiple servers and load balancing?
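
    A rough worked estimate and a first-pass measurement (not from the original question; URL and request counts are placeholders): 300 users each polling every 0.3 s means about 300 × (1 / 0.3 s) ≈ 1,000 requests per second of steady load, which is the target a load generator should be pushed past.

        # ApacheBench: 60,000 requests at 300 concurrent clients against the AJAX endpoint
        ab -n 60000 -c 300 http://your-server/your-ajax-endpoint
        # Read "Requests per second" and the latency percentiles from the output,
        # then repeat while doubling -c to find where throughput stops scaling.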

    Read the article

  • What data counters / meters are available?

    - by Santosh
    I have a wireless 3G modem that works well on Windows-based operating systems; its interface software was made Windows-centric. It can still connect to the internet on Ubuntu or other Linux-based operating systems, but it won't show the data counter (the interface which shows how much data has been transferred, and at what speed). If I continue to surf the internet in Linux I won't have any idea how much data has been used, and it would become heavy on my pocket. So I just want software that lets me know how much data has been transferred; if there is a limiter that warns or disconnects me when I reach a predefined number of MBs, even better. Please let me know if there is any software or script or something like that already out there.
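
    A sketch of one common option (not an endorsement from the original question): vnStat keeps per-interface transfer totals; it has no built-in cut-off, but its output is easy to script a warning around. The interface name below is an assumption - 3G modems often appear as ppp0, so check first.

        sudo apt-get install vnstat
        # Start logging the 3G interface (check the name with "ifconfig" or "ip link"):
        sudo vnstat -u -i ppp0
        # Daily and monthly totals:
        vnstat -i ppp0 -d
        vnstat -i ppp0 -m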

    Read the article

  • Can't mount windows partition?

    - by C.J.
    When I try to open the Windows Partition from Ubuntu I receive the error: Unable to mount 55 GB Filesystem Error mounting: mount exited without exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x04010400 size: 1024 usa_ofs: 1026 usa_count: 1026: Invalid argument Record 6 has no FILE magic (0x4010400) Failed to open inode FILE_Bitmap: Input/output error Failed to mount '/dev/sda2': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more detail. Additionally, I can't open the Windows Partition. I've tried updating it many times but it won't show up on GRUB. Does anybody know what all this means? And how I might fix it? I thank you for any help in advance

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >