Search Results

Search found 25180 results on 1008 pages for 'post processing'.


  • Modifying %HOME% on Emacs 24.3 on Windows causes .emacs to not be found

    - by David
    In order for magit to read my git settings on Emacs 24.3.1 for Windows, I added the following configuration from a Stack Overflow post: (when (string-equal system-type "windows-nt") (setenv "HOME" (concat (getenv "HOMEDRIVE") (getenv "HOMEPATH")))) Interestingly, after this is added to my .emacs, Emacs thinks .emacs doesn't exist anymore. If I do M-x cd to ~ and then M-x pwd, it says ~ is located at C:\Users\Me\AppData\Roaming. It appears that Emacs is reading the .emacs settings file, because it loads my custom theme. However, if I try to find ~/.emacs, Emacs doesn't see it. The file has all permissions on the file system for any user.

    Read the article

  • NetBeans Hangs on New Project Creation

    - by Jason
    When creating a New Project in NetBeans, it hangs on the creation screen (screenshot linked in the original post). I'm running NetBeans 7.0.1 on Xubuntu 13.04. java -version prints: java version "1.7.0_25" OpenJDK Runtime Environment (IcedTea 2.3.10) (7u25-2.3.10-1ubuntu0.13.04.2) OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode). /usr/lib/jvm/ contains the following folders: java-1.5.0-gcj-4.7-amd64, java-6-openjdk-amd64, java-7-openjdk-common, java-1.6.0-openjdk-amd64, java-6-openjdk-common, java-1.7.0-openjdk-amd64, java-7-openjdk-amd64. I tried editing /usr/share/netbeans/7.0.1/etc/netbeans.conf with the following (as was suggested in another post): netbeans_jdkhome="/usr/lib/jvm/java-7-openjdk-amd64/jre/" But that didn't work. Instead, NetBeans then presented an error on the loading splash when it reached "Turning on Modules...", claiming the JDK was missing. Both NetBeans and the OpenJDK Java 7/6 runtimes were installed through the Ubuntu Software Centre. I've also tried uninstalling and reinstalling both Java and NetBeans. Any help would be greatly appreciated, thanks!

    Read the article

  • My website is infected, I restored a backup of the uninfected files, how long will it take to un-mark as dangerous?

    - by Cyclone
    My website www.sagamountain.com was recently infected by a malware distributor (or at least I think it may have been). I have removed all external content, Google ads, Firefly chat, etc. I uploaded a backup from a few weeks ago, when there was no issue. I patched the SQL injection hole. Now, how long will it take to unmark it as dangerous? Where can I contact Google? I am not sure if this is the right place to post it, but since it may have been a server issue I may as well. Can sites inject base64 code via a virus on the whole server, or is it only via SQL injection? Thanks for the help, viruses freak me out. Is there an online virus scanner that can scan my page and tell me what is wrong?

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros: Early integration helps team members avoid painful merges. It encourages better coding habits, because everyone makes sure that they don't break co-workers' code every day. Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work. Actually I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. when you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work and going home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code only once you have finished an entire task, I believe the best approach is setting small, flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, or article you could suggest that is in line with my opinion? And do you see any problems with working this way? Thank you in advance.

    Read the article

  • WSUS setup on ADS Domain Controller VM

    - by NickC
    How should I set up sharing/directory permissions to allow the following to work: VMHost - Hyper-V partition (not part of the domain); ADServer - Active Directory Domain Controller running as a guest VM on VMHost; \\VMHost\Updates - disk share for WSUS to put the updates and other local files. The problem is what permissions I need to give to \\VMHost\Updates to allow WSUS to work with this directory, to avoid the "The WSUS content directory is not accessible" error which I am currently seeing. As far as I can tell, WSUS runs as the "domain/Network Service" account. The question is, without adding VMHost to the domain, how do I give that user appropriate permissions to this directory? Is there a way that VMHost can be told to trust ADServer and then be able to use user accounts from there? Sort of relates to my other post here: Win 2012 Domain controller VM, should the Hyper-V host be part of the domain? Thanks, Nick

    Read the article

  • Is there a command-line utility app which can find a specific block of lines in a text file, and replace it?

    - by fred.bear
    UPDATE (see end of question) The text "search and replace" utility programs I've seen seem to only search on a line-by-line basis... Is there a command-line tool which can locate one block of lines (in a text file) and replace it with another block of lines? For example: does the test file contain this exact group of lines: 'Twas brillig, and the slithy toves Did gyre and gimble in the wabe: All mimsy were the borogoves, And the mome raths outgrabe. 'Beware the Jabberwock, my son! The jaws that bite, the claws that catch! Beware the Jubjub bird, and shun The frumious Bandersnatch!' I want this so that I can replace multiple lines of text in a file and know I'm not overwriting the wrong lines. I would never replace "The Jabberwocky" (Lewis Carroll), but it makes a novel example :) UPDATE: ..(sub-update) My following comments about when not to use sed are only in this context: don't push any tool too far beyond its design intent (I use sed quite often, and consider it to be invaluable). I just now found an interesting web page about sed and when not to use it. So, because of all the sed answers, I'll post the link; it is part of the sed FAQ on SourceForge. Also, I'm pretty sure there is some way diff can do the job of locating the block of text (once it's located, the replacement is quite straightforward using head and tail)... 'diff' dumps all the necessary data, but I haven't yet worked out how to filter it... (I'm still working on it)
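
    For illustration only, here is a minimal C# sketch (not an existing utility; the argument handling and output file name are assumptions) of the logic being asked for: read the whole file, find the exact block of lines, and splice in a replacement block.

        // Minimal block-find-and-replace sketch; file arguments are hypothetical.
        using System;
        using System.IO;
        using System.Linq;

        class BlockReplace
        {
            static void Main(string[] args)
            {
                // args: <text file> <file holding the block to find> <file holding the replacement block>
                string[] text = File.ReadAllLines(args[0]);
                string[] find = File.ReadAllLines(args[1]);
                string[] repl = File.ReadAllLines(args[2]);

                for (int i = 0; i <= text.Length - find.Length; i++)
                {
                    // Compare the candidate window against the wanted block, line for line.
                    if (text.Skip(i).Take(find.Length).SequenceEqual(find))
                    {
                        var output = text.Take(i).Concat(repl).Concat(text.Skip(i + find.Length));
                        File.WriteAllLines(args[0] + ".new", output);
                        return;
                    }
                }
                Console.Error.WriteLine("Exact block not found; nothing written.");
            }
        }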

    Read the article

  • How to send SMS Using SMS Email Gateway internationally?

    - by user35259
    Hi all, I don't know if this is the right place to post my question, but I will just ask anyway. I'm trying to send SMS messages from a Unix-based machine by sending a mail to [email protected]. This works fine and messages are received normally. What I want to ask is: am I able to use this service to send SMS messages to an international number? What about the country code? How do I add it when the format needs ten digits only? Thanks

    Read the article

  • C++ Numerical Recipes – A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey – over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition http://amzn.to/YtdpkS I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful – providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming – a lot more exciting than LOB CRUD!!! When you think about it, everything is a function – we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task – you will be using WYSIWYG designers to build: GUIs, MVVM, service mapping & virtualization, workflows, ORM entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers to entry of both C++ & numerical analysis seem like a rational choice to me. It's also fascinating – stepping outside the box. This is not the first time I've delved into numerical analysis. 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/ 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com – not bad. 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI - not an easy read, as topics jump back & forth across chapters. 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff). I also read through AI: A Modern Approach, 3rd edition, end to end http://amzn.to/V5yQSp - this took me a few years but was the most rewarding experience. I'll post progress updates – see you on the other side!

    Read the article

  • Blank pale blue screen with Live USB Kubuntu on AMD Sempron 2800+ processor

    - by WGCman
    I am trying to install Kubuntu onto a USB stick to use on my Acer Aspire 1362 laptop with an AMD Sempron 2800+ chip. Using Windows XP, I downloaded and saved to the laptop's hard drive: kubuntu-12.04.1-desktop-i386.iso from the Kubuntu website and LinuxLive USB Creator 2.8.16.exe from the LinuxLive website. I then installed the latter and ran it, installing Kubuntu onto the memory stick. Leaving the BIOS setup unchanged, the USB stick is ignored and Windows boots. If I change the BIOS boot order so that the memory stick takes precedence, I see a dark blue screen announcing Kubuntu 12.04, and on selecting either "Live mode" or "Persistent mode", messages flash by quickly, some of which appear to be error messages, including "trying to unpack rootfs image as initramfs", "cannot allocate resource for mainboard", "no plug and play device found". Eventually I see a pale blue screen with four moving dots announcing Kubuntu 12.04, similar to the login screen of my Kubuntu desktop, but with no invitation to log in or indeed any dialog. After several minutes, this changes to a black screen with more messages including "no caching mode present", "ADDRCONF(NETDEV_UP): wlan0: link is not ready", then degrades to a blank pale blue screen which can only be cleared by switching the computer off. Finding no way to log the error messages passing by, I managed to photograph most of them, but know no way to attach the photo to this forum. As suggested by User 68186 (to whom thanks!), I have edited my original post to reflect the recent progress, so the following two comments are now superseded.

    Read the article

  • AutoVue for Agile 20.2.2 Now Available!!

    - by Warren Baird
    We are happy to announce that AutoVue for Agile 20.2.2 is now available via the Oracle Software Delivery Cloud.   AutoVue for Agile 20.2.2 is a minor release within the 20.2 product family that is specifically targeted for users of Agile PLM 9. AutoVue 20.2.2 brings a number of improvements, including support for SolidWorks 2013, AutoCAD and Inventor 2014, SolidEdge ST5, and Cadence Allegro 16.6.   It also includes support for Adobe Illustrator CS4 and up.   Another improvement involves bringing our support for Oracle Linux and Java Virtual Machine versions in-line with Agile's support. Please see our previous post (https://blogs.oracle.com/enterprisevisualization/entry/autovue_20_2_2_is) for more details on the specifics introduced in AutoVue 20.2.2. Agile PLM 9.3.3 has also been released, which as part of its many improvements introduces support for associating AutoVue annotations with change request objects in Agile, and a preliminary solution using Augmented Business Visualization to allow the creation of change objects from within AutoVue.   Please see the Agile Transfer of Information sessions in the KM note 1589164.1 for more details. We will provide additional posts over the next couple of weeks providing more details on these improvements.  Until then, if you have any questions, let us know in the comments! 

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about whatever we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with the SQL features and transactions. As a part of learning NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to: Scale by simply adding nodes to the cluster with a single command; Run billions of transactions a day; Run fast real-time analytics; Achieve high availability with recovery from node failure; Manage itself; Easily migrate from MySQL as it is nearly plug-and-play compatible, using MySQL drivers, tools and replication. While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Indeed, ClustrixDB sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them. Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster. EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds. NoMoreRack – One of the two fastest-growing e-commerce companies in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud. MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore. Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must admit that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the talk is about NewSQL, I know what I am talking about.
Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine so that in the future, when we discuss its technical aspects, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Clustrix

    Read the article

  • nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set anything to do with Separate X Screens up in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, and no ability to have a window on it; the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor to it, it's an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set anything to do with TwinView up in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move my cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

    Read the article

  • Using PowerShell to call a native command-line app and capture STDERR

    - by crtracy
    I'm using a port of a Cygwin tool on Windows which writes normal status messages to STDERR. This produces ugly output when run from PowerShell: PS> dos2unix.exe -n StartApp.sh StartApp_fixed.sh dos2unix.exe : dos2unix: converting file StartEC3.sh to file StartEC3_fixed.sh in UNIX format ... At line:1 char:13 + dos2unix.exe <<<< -n StartApp.sh StartApp_fixed.sh + CategoryInfo : NotSpecified: (dos2unix: conve...UNIX format ...:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError Is there a better way? P.S. I intend to post one solution I've found and compare it to answers from others.

    Read the article

  • Using PreApplicationStartMethod for ASP.NET 4.0 Application to Initialize assemblies

    - by ChrisD
    Sometimes your ASP.NET application needs to hook up some code before the application is even started. Assemblies support a custom attribute called PreApplicationStartMethod, which can be applied to any assembly that is loaded into your ASP.NET application; the ASP.NET engine will call the method you specify within it before actually running any of the code defined in the application. Let's discuss how to use it, step by step: 1. Add an assembly to the application and add this custom attribute to its AssemblyInfo.cs. Remember, the method you specify for initialization should be a public static void method without any arguments. Let's define a method InitializeApp. You need to write: [assembly:PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")] 2. After you add this to the assembly, you need to add some code inside the InitializeType.InitializeApp method within the assembly. public static class InitializeType {     public static void InitializeApp()     {           // Initialize application     } } 3. You must reference this class library so that when the application starts and ASP.NET starts loading the dependent assemblies, it will call the method InitializeApp automatically. Warning: Even though you can use this attribute easily, you should be aware that you can define this kind of method in all of the assemblies that you reference, but there is no guarantee of the order in which each of the methods will be called. Hence it is recommended that this method be isolated and free of side effects on other dependent assemblies. The method InitializeApp will be called well before the Application_Start event, and even before App_Code is compiled. This attribute is mainly used to register assemblies or build providers. Read the documentation for details. I hope this post is helpful.
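
    For reference, here is the same example consolidated into one compilable file, as a minimal sketch (the MyInitializer namespace and the trace call are illustrative choices, not part of the original post):

        // AssemblyInfo.cs (or any file in the referenced class library)
        using System.Web;

        [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]

        namespace MyInitializer
        {
            public static class InitializeType
            {
                // Must be public static void with no arguments.
                public static void InitializeApp()
                {
                    // Runs before Application_Start and before App_Code is compiled;
                    // typical uses are registering assemblies or build providers.
                    System.Diagnostics.Trace.WriteLine("Pre-application start ran at " + System.DateTime.UtcNow);
                }
            }
        }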

    Read the article

  • Oracle Tutor: Document Audit and Maintenance

    - by Emily Chorba
    Perhaps the most critical phase in the process of documenting policies and procedures -- and the greatest challenge to owners -- is the maintenance of published documents. Documents must reflect current practice and they must be accurate. The most effective way to ensure this is through the regular audit of documents. In the Tutor environment, a Document Owner must audit each of his/her documents once every 6 to 12 months to verify that the document reflects actual practice. If it does not, the document is updated or employees are retrained (depending on the nature of the discrepancy). If a document update is required, the Tutor system enables the owner to modify and redistribute the document within one work day. This is possible because: Documents contain a minimum of detail, thereby reducing the edits. Document format and structure are simple, so changes are easy to identify. The Tutor Author software tool enables the Document Owner or the Document Administrator to update the file quickly. The Document Administrator verifies the document format and integration, publishes the document, and distributes it to all affected employees, thereby freeing the Document Owner of the more tedious tasks. Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Emily Chorba, Principal Product Manager, Oracle Tutor & UPK

    Read the article

  • Using prefix incremented loops in C#

    - by KChaloux
    Back when I started programming in college, a friend encouraged me to use the prefix increment operator ++i instead of the postfix i++, citing that there was a slight chance of better performance with no real chance of a downside. I realize this is true in C++, and it's become a general habit that I continue to follow. I'm led to believe that it makes little to no difference when used in a loop in C#, regardless of data type. Apparently the ++ operator can't be overridden. Nevertheless, I like the appearance more, and don't see a direct downside to it. It did astonish a coworker just a moment ago, though; he made the (fairly logical) assumption that my loop would terminate early as a result. He's a self-taught programmer, and apparently never came across the C++ convention. That made me question whether or not the equivalent behavior of prefix and postfix increment and decrement operators in loops is well known enough. Is it acceptable for me to continue using ++i in looping constructs because of style preference, even though it has no real performance benefit? Or is it likely to cause confusion amongst other programmers? Note: This is assuming the ++i convention is used consistently throughout all code.
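
    A small demonstration of the equivalence in question (a hypothetical snippet, not from the original post): in a C# for statement the value of the increment expression is discarded, so i++ and ++i produce identical iteration counts and output.

        using System;

        class PrefixVsPostfix
        {
            static void Main()
            {
                for (int i = 0; i < 3; i++)
                    Console.Write(i + " ");   // prints: 0 1 2
                Console.WriteLine();

                for (int i = 0; i < 3; ++i)
                    Console.Write(i + " ");   // prints: 0 1 2
                Console.WriteLine();
            }
        }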

    Read the article

  • GWB | Comment Spam On The Rise

    - by Geekswithblogs Administrator
    I don’t know a member on Geekswithblogs.net that is not frustrated with the amount of spam they get. It is a major problem that we have been dealing with for 6+ years, and we keep trying to come up with new ways to fight it. As spammers get smarter, we have to continue to upgrade the tools we use to combat them. Just like any spam filter, sometimes good comments will get caught up. This has been a huge concern for some bloggers, forcing us to tune what we treat as spam and not spam. So this post is here just to state that we know the spam problem is like a wave: sometimes it is not so bad, other times it gets worse. Right now it is worse. One measure we will take soon, if it continues, is a requirement for CAPTCHA, since most members don’t clean up their spam via the admin tools (which are not the best tools, I know). Also, I want to solicit a better approach from the members: what would you like the spam interface on GWB to be like? Be realistic, because we all want "Zero Spam, Good Comments live". Related Tags: Geekswithblogs.net, Spam

    Read the article

  • You Can’t Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, for which I got the following error (screenshot omitted). As you can see from the error message, you aren't allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered the error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprise, I got a similar error. Conclusion: Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue, a few friends replied that they had also found this error recently. I don't know the internal reasoning for why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated; just leave a comment. -Frog Out

    Read the article

  • Can I run AD commands from a standard PowerShell script?

    - by Ben
    I am putting together a script to run post-sysprep. It should check if the machine is on the network, and if it is, it should query AD to see if a computer account exists with its service tag (we're using these as the hostnames of the machines). If it does exist, it should delete the account and rejoin the machine to the domain. I have the majority of the script running, but need to run the following: Remove-ADComputer -Identity $distinguishedName How can I run this from the "standard" PowerShell environment? I don't want to use the AD module. (By the way, I'm on a mixed-mode 2000/03 domain, as we are in the process of upgrading to 2008.) I'm new to PowerShell, so be gentle if I'm completely missing the point! Thanks, Ben

    Read the article

  • 1.5 TB USB Drive failed to mount

    - by user89348
    Seagate 1.5 TB FreeAgent USB hard drive, formatted FAT32. I figure it is 75% full. It used to work fine in Xubuntu: it shows up in Cairo Dock, but when I click on it I get "failed to mount drive". Nautilus does not display the icon, nor does Thunar. Windows Vista will no longer recognise the drive either. BackTrack 5 R3 also no longer mounts it. BUT, and here is the BIG BUT, my Pioneer DV-410 reads the files and plays everything just fine. I believe this all happened after an unclean shutdown / Xubuntu 12.10 system freeze. Why can't Xubuntu mount this drive when a crappy 13-year-old DVD player can mount it? I am desperate to back up the data before the drive becomes completely unreadable. I am using Xubuntu 12.10 (Quantal) with the current 3.5.0-17 kernel (the past 3 kernels won't read it either), and all the newest apt-get update / dist-upgrade packages are applied. I will post any other info you folks request as needed. Additional info as requested by githlar: $ sudo fsck.vfat /dev/sdb dosfsck 3.0.13, 30 Jun 2012, FAT32, LFN Read 512 bytes at 0:Input/output error $ lsusb Bus 001 Device 003: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter Bus 002 Device 002: ID 0bc2:3001 Seagate RSS LLC Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Read the article

  • Managing a Kindle Fire on 12.04 via Micro-USB

    - by pirtle
    To begin, I have read both Is there a way to get a Kindle Fire to work with 12.04? and How can I transfer files to a Kindle Fire with a Micro-USB cable? My problem is that I am unable to mount my Kindle Fire in order to add books to it. I have installed calibre, but it is unable to manage any devices until the computer itself has recognized it. The latter post had an excellent answer (provided by @jeremiah) that was making some progress. Unfortunately, I think I don't know enough about the -t flag used with mount. This is what I've done... Ran dmesg to locate the device: [ 3.920886] sd 6:0:0:0: [sdb] Attached SCSI removable disk Confirmed it's location: $ sudo ls -l /dev/disk/by-id lrwxrwxrwx 1 root root 9 Aug 18 15:52 usb-Amazon_Kindle_3C6C002600000001-0:0 -> ../../sdb So we know that my Kindle is recognized on /dev/sdb. I then used the mount command suggested by @jeremiah: $ sudo mount -t ext3 /dev/sdb/ /mnt/kindle/ mount: no medium found on /dev/sdb The same error occurs for sudo mount /dev/sdb /mnt/kindle. Note: I have created the 'kindle' directory in 'mnt' Any suggestions?

    Read the article

  • Impossible to select folders and files with mouse (Ubuntu 12.04)

    - by François
    First-time post for me here (after being a regular reader for two years though), so thank you all for the quality of replies and help provided. My problem is apparently very simple, but a tricky one. I just installed Ubuntu 12.04.1 along with the GNOME 3 shell environment on my new desktop PC, an Acer Aspire X3995 (see config below). Everything works (more or less) so far (I still have problems with sound and disabled two-finger gestures on my screen -- which I will have to deal with through X config settings, I think -- though), but the main problem is that I cannot select files/folders with my USB mouse. When I try to double-click on them, nothing happens (sometimes one folder or file is selected but then unselected again). Note that navigation works perfectly from the USB keyboard and from the touch-screen (I am using a 23" wide touch-screen Acer monitor T231Hbmid). Also, the mouse works perfectly with other menu navigation, with the only difference that the text of certain menus is selected as if I were holding the left button down on them. So I assume the problem is only related to the mouse. Needless to say, the usual basic hardware checks have been performed (unplugging, powering off, etc.). My level is simply "advanced user", meaning that if you provide me with intelligible input I should find my way, but please don't expect too much technical/specific knowledge... :) Please let me know if you need more information on this bug. Now, fingers crossed... and thanks in advance! Ciao, François Config of Acer Aspire X3995: Ubuntu 12.04 / GNOME 3 shell environment / Intel Core i5 3450 / nVidia GeForce 605, 1 GB. Screen: Acer Monitor TFT 23" wide T231Hbmid

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post, We see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million): Why? I don’t know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies-to-visitors. Whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? Is there any more accurate way to count "visitors" to a website other than relying on browser cookies? Or is this just always going to be kind of a best-effort estimation crapshoot no matter how you measure it?
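
    As a rough illustration of what that ratio does to the headline number (a back-of-the-envelope calculation, not a figure from either service): if Google Analytics counts 15 million cookies and you instead assume 1.24 cookies per real visitor, the estimate becomes 15,000,000 / 1.24 ≈ 12.1 million visitors, which shows how much the choice of divisor alone moves the count.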

    Read the article

  • The NEW Oracle Enterprise Manager Extensibility Exchange

    - by Joe Diemer
    Oracle Enterprise Manager continues to expand its ecosystem with the NEW Extensibility Exchange! The Exchange offers a searchable listing of Enterprise Manager entities. Today it's stocked with plug-ins and connectors for Enterprise Manager 12c and 11g. Anyone - partners, customers, ACE community members, anyone - can post an entity, subject to approval of course. So in addition to plug-ins and connectors, the Exchange will have best practices, deployment procedures, templates, and essentially any Enterprise Manager entity that's relevant. The Exchange provides Development Resources to guide contributors in the creation of plug-ins and connectors. A Community Resources page features plug-ins validated through the Oracle Validated Integration program as well as some other contributions important to customers. You can also discover ways to get more involved with Enterprise Manager through the user and partner communities. The Exchange was announced in the October 2nd Enterprise Manager Partner Press Release and is being presented at Oracle OpenWorld 2012 during the following sessions: • "Using Oracle Enterprise Manager to Manage Your Own Private Cloud" General Session – Tuesday Oct 2nd • "Managing Heterogeneous Environments with Oracle Enterprise Manager" Conference Session – Tuesday Oct 2nd • "Using Management Already Built into Oracle Products: Oracle Enterprise Manager" Oracle Partner Network Exchange Session – Wednesday Oct 3rd. Check it out at http://www.oracle.com/goto/emextensibility, and let us know what you think by posting a comment below or clicking the "Forum" button at the Exchange itself.

    Read the article

  • Why does video playback lag/freeze when I go into full-screen mode?

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to full-screen mode (16:9) the following things happen: 1) Video starts lagging. 2) Audio and video go out of sync. 3) CPU usage rises to ~50%. 4) SMPlayer starts to hang. My current SMPlayer configuration: 1) Video output driver = x11 (slow) 2) Audio output driver = alsa (0.0-HDA Intel) 3) Cache = 8192 KB 4) Threads for decoding (MPEG-1/2 and H.264 only) = 2 Things I tried to solve this problem: 1) Tried changing the video output driver to xv, gl. 2) Tried changing the audio output driver to pulse. 3) Tried increasing the cache size and also tried using nocache. Everything works fine on Windows, but I don't want to switch to Windows just to play video files. My system config: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6 GHz, 2 GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae. Everything else is working fine: I have no resolution issues, and Bluetooth and wireless are also working fine. Just ask me to submit any other log file and I will be happy to post it. (Links in the original post: SMPlayer log, MPlayer terminal output, and codec information for the currently playing file.)

    Read the article
