Search Results

Search found 29938 results on 1198 pages for 'version hunter'.


  • Higher screen resolution in VirtualBox?

    - by pelms
    I've just installed Ubuntu 10.04 into VirtualBox on Windows 7. Unfortunately the only options showing for screen resolution are 640x480 and 800x600, and the monitor is showing as 'Unknown'. How would I go about upping the resolution to 1280x1024 (I'm on a 1600x1200 monitor)? Update: I tried mounting the VirtualBox 'Guest Additions' ISO (from the VBox 'Devices' menu) and doing sudo sh ./VBoxLinuxAdditions-x86.run from the mounted drive, which gave 2 new listed resolutions after a reboot (1024x768 and the 16:9 version of that resolution). These worked when I selected them but disappeared when I switched back to another resolution. I tried rebooting and running VBoxLinuxAdditions-x86.run again, but only the 2 low-res options were listed this time. I think I'm going to reinstall... It seems to be a VBox problem rather than an Ubuntu problem, as after reinstalling 10.04 over the original virtual partition, sudo sh ./VBoxLinuxAdditions-x86.run now has no effect at all.

    Read the article

  • Trying to update Asus BIOS: FreeDOS crashes

    - by ZekeDroid
    My UX31 Zenbook is experiencing some weird shutdown behavior when the battery drops below 50%, and the internet seems to agree that updating the BIOS is a good step forward, since there were issues with the kernel before. I downloaded both the correct BIOS file and the Windows 7 utility tool, and now need to boot FreeDOS to run it. However, I've tried every method out there and they all fail (or so I think): Using unetbootin's FreeDOS 1.0 image, I get an error saying it couldn't run drivers and then I end up at a command line on drive A:; I assumed that was a dead end. Using unetbootin but with the FreeDOS 1.1 image downloaded directly, I get an error of "bad or missing command interpreter". I looked online and the solutions didn't work either. So, is there an alternative to FreeDOS, or to installing a BIOS, that I could use?

    Read the article

  • The best AutoCAD 2010 Ubuntu alternative?

    - by onvas
    I'm looking for the best AutoCAD 2010 alternative for Ubuntu. Wine's support for the 2010 version isn't polished, so I'm looking for similar Linux-based programs. I know this can be subjective, so I'd like to know which Ubuntu alternative has the most similar and significant features compared to AutoCAD 2010. I'm not familiar with the program because I'm researching this for my sister, who is studying Aeronautical Engineering. Any help is appreciated. I'm using 12.04 64-bit on my ThinkPad R61i with 3.8GB memory and a 160GB hard drive.

    Read the article

  • Speakers don't work in 12.10 but they work fine on windows7

    - by giri
    I have recently upgraded my Ubuntu 12.04 to 12.10 and am finding issues with my speakers as well as my microphone. When I boot the system they don't work, but (I don't know why) when I restart once or twice they work fine. There is no problem with my laptop (Dell XPS), as they work well on Windows 7. I have my sound settings as follows: Hardware --- Built-in Audio, 1 Output / 1 Input, Analog Stereo Duplex; Input (Internal Microphone) & Output (Speakers) --- Built-in Audio, Analog Stereo. Any suggestions to fix the problem?

    Read the article

  • PrestaShop install SQL error

    - by Steve
    I am trying to install PrestaShop 1.4.0.17, and reach Step 3. I enter database information, which tests okay, and I choose the second option: Full mode: includes 100+ additional modules and demo products (FREE too!). I choose Next, and receive the error: Error while inserting data in the database: ‘CREATE TABLE `shop_county_zip_code` ( `id_county` INT NOT NULL , `from_zip_code` INT NOT NULL , `to_zip_code` INT NOT NULL , PRIMARY KEY ( `id_county` , `from_zip_code` , `to_zip_code` ) ) ENGINE=’ You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \’\’ at line 6(Error: : 1064) This happens if I use either MyISAM, or InnoDB. Why is this happening? This also happens if I drop all database tables, and try again in simple mode. Is there a manual installation method?

    Read the article

  • First time user here

    - by Brian
    I've never used Linux before, but I decided I want to start somewhere and Ubuntu seemed like the right place to start. I burned the 64-bit ISO onto a CD and installed it onto a fresh new hard drive I got, and it installed nicely, or so I thought. The first major problem was that the screen displayed oddly; second, when I tried to log in everything just kind of froze. I could still move the cursor, but that's it. I'm not too tech savvy, but I can follow instructions, and any help given would be greatly appreciated. I am considering dual-booting it with my other hard drive that has Windows 7 on it, but I'm afraid I might mess that up; plus, if I do it that way I wouldn't know how to get rid of Ubuntu if I decide it's not for me.

    Read the article

  • vlc issue with unity

    - by bob
    I'm using Ubuntu 12.04 with Unity 3D. I am having trouble locking VLC (version: VLC media player 2.0.1 Twoflower (revision 2.0.1-0-gf432547)) to the left panel. It doesn't behave the same way other applications do. This is a fresh install from the repo, and I didn't change any settings; Ubuntu 12.04 is also a fresh install. For example, (left) clicking on its icon doesn't bring it to the front, but creates a new instance (this in particular is very annoying: if it gets behind another window that happens to be maximized, there is no way to bring it back up). It also doesn't show the arrow on the right indicating the number of instances there are. Any way around that?

    Read the article

  • Problem installing virtualbox on ubuntu 9.04

    - by debanjan
    I'm using Ubuntu 9.04. For the last few days I've been trying to install VirtualBox, but I can't. I went to this link for VirtualBox: https://www.virtualbox.org/wiki/Download_Old_Builds_3_0 and found the VirtualBox build for my version. After downloading it in XP, I restarted my PC and opened Ubuntu. While installing with the package installer there is a problem: the status gives a message every time I try to install, saying "Error: Dependency is not satisfiable: libcurl3 (=7.16.2-1)". There is an icon named "install package" but it's hidden. I don't know what the problem is. I'm new to Ubuntu, please try to help me.

    Read the article

  • JBoss Application Server 6 available: Red Hat's Java application server offers full Java EE 6 support

    JBoss Application Server 6 available. Red Hat's Java application server offers full Java EE 6 support. The new version of JBoss, the Java application server, is available. It is one of the first servers to offer full, production-ready support for Java Enterprise Edition 6 (JEE 6), the Java specification that is still struggling to find its place in enterprises. JBoss is a free open-source project, acquired and led since 2006 by Red Hat, which also offers paid support as part of the JBoss Enterprise Middleware package and the JBoss Enterprise Application Platform. As a reminder, GlassFish, ...

    Read the article

  • Free Training - Building Silverlight Business Applications

    We recently released a new free Silverlight 4 training kit that walks you through building business applications with Silverlight 4. You can also download the entire offline version of the kit here. You can use the 8 modules, 25 videos, and several hands-on labs online or offline from links on the Channel 9 site. I've included a breakdown and links to all of the content here in this post. The key to this training material is not the features it covers (though it covers a variety of topics including... Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • How do I create a 12.04 LiveUSB for a non-PAE machine?

    - by DrSkylaser
    I'm trying to create a dual-boot laptop with Ubuntu 12.04 and some flavor of Windows (TBD). To do that, I need to do some work on partitions & install 12.04. To do that, I need to create a bootable USB that will work with my non-PAE-capable CPU. Someone pointed me to a mini.iso that was allegedly non-PAE-friendly, but it gave me the same error as the straight-up 12.04 desktop ISO. What version do I actually need? (This isn't going to be a virtual machine--I don't think the laptop has the RAM to handle that happily--so enabling PAE in the virtual machine software doesn't help me.)

    Read the article

  • CMake can not find PythonLibs

    - by tintin
    I am trying to build INRIA Graphite on my Ubuntu system, which is running in a VirtualBox VM. I followed the instructions and installed the python-dev packages, but when I run cmake I still get an error: CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message): Could NOT find PythonLibs (missing: PYTHON_LIBRARIES PYTHON_INCLUDE_DIRS) (Required is at least version "3.2") Call Stack (most recent call first): /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE) /usr/share/cmake-2.8/Modules/FindPythonLibs.cmake:208 (FIND_PACKAGE_HANDLE_STANDARD_ARGS) src/packages/OGF/gel_python3/CMakeLists.txt:11 (FIND_PACKAGE) I checked /usr/lib/ and found: tintin@tintin-VirtualBox:/usr/lib$ find . -name "libpython*" ./x86_64-linux-gnu/libpython3.4m.so.1.0 ./x86_64-linux-gnu/libpython2.7.so.1.0 ./x86_64-linux-gnu/libpython3.4m.a ./x86_64-linux-gnu/libpython2.7.a ./x86_64-linux-gnu/libpython3.4m.so ./x86_64-linux-gnu/libpython2.7.so ./x86_64-linux-gnu/libpython2.7.so.1 ./x86_64-linux-gnu/libpython3.4m.so.1 So why can't CMake find PythonLibs, and how should I deal with this?

    Read the article

  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes. It requires so much work to change them in a huge project like GlassFish. Not to mention that you may well have to support those APIs forever. They are highly overused in GlassFish. In fact I'd bet that > 95% of classes are marked as public for no good reason. It's just (bad) habit is my guess.
    Private and default visibility (I call it package-private) is easier to maintain. It is much, much easier to change such classes and methods around. If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers. But even that won't be theoretically reliable. What if a caller is using reflection to access public methods? You may never find such usages. If you have package-private methods, it's easy. Simply grep through all the code in that one package. As long as that package compiles OK you're all set. There can't be any compile errors anywhere else. It's a waste of time to even look around or build the "outside" world.
    So you may be thinking: "Aha! I'll just make my module have one giant package with all the java files. Then I can use the default visibility and maintenance will be much easier." But there's a problem. You are wasting a very nice feature of Java -- organizing code into separate packages. It also makes the code much more encapsulated. Unfortunately, to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module. Which finally brings me to the point of this blog:
    If Only There Was A Module-Private Visibility Available
    Well, surprise! There is such a mechanism. If your project is running under OSGi, that is. Like GlassFish does! With this mechanism you can easily add another level of visibility by telling OSGi exactly which publics you want to be exposed outside of the module. You get the best of both worlds: better encapsulation of your code so that maintenance is easier and productivity is increased, and usage of public visibility inside the module so that you can encapsulate intra-module better with packages.
    How I do this in GlassFish: carefully plan out at least one package that will contain "true" publics. This is the package that will be exported by OSGi. I recommend just one package. Here is how to tell OSGi to use it in GlassFish -- edit osgi.bundle like so:
        -exportcontents: org.glassfish.mymodule.truepublics; version=${project.osgi.version}
    Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: accessing "module-private" items outside of the module is controlled at run-time, not compile-time. The compiler has no clue that a public in a dependent module isn't really public. It will happily compile it. At runtime you will definitely see fireworks. The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire. OSGi will complain loudly when that module gets loaded. OSGi will refuse to load it.
You will see an error like this: remote failure: Error while loading FOO: Exception while adding the new configuration : Error occurred during deployment: Exception while loading the app : org.osgi.framework.BundleException: Unresolved constraint in bundle com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0: missing requirement [115.0] osgi.wiring.package; (osgi.wiring.package=org.glassfish.mymodule.unexported). Please see server.log for more details. That is, if you accidentally change code in module B to use a public that is really a "module-private" in module A, then you will see the error immediately when you try to test whatever you were changing in module B.
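    To make the layout concrete, here is a minimal two-file sketch of the pattern described above. The package names follow the -exportcontents example from the post; the class names (BackupService, BackupEngine) are hypothetical and are not GlassFish code.

        // File 1: org/glassfish/mymodule/truepublics/BackupService.java
        // This package is listed in -exportcontents, so its publics are visible to other bundles.
        package org.glassfish.mymodule.truepublics;

        import org.glassfish.mymodule.internal.BackupEngine;

        public final class BackupService {
            public void backup(String target) {
                // Delegates to a class that is "public" only inside this bundle.
                new BackupEngine().run(target);
            }
        }

        // File 2: org/glassfish/mymodule/internal/BackupEngine.java
        // NOT exported: public to every package in this module, invisible to other bundles.
        package org.glassfish.mymodule.internal;

        public class BackupEngine {
            public void run(String target) {
                System.out.println("backing up " + target);
            }
        }

    A class in another bundle that imports BackupEngine will still compile, but at load time OSGi rejects it with an "Unresolved constraint ... missing requirement osgi.wiring.package" error like the one shown above -- which is exactly the module-private behavior being described.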

    Read the article

  • Problem Trying to Install ROOT (by CERN) on Ubuntu 11.04 i386

    - by Jose Luis
    I hope you can help me with this problem. I am trying to install ROOT on my computer, but I have a problem and I don't know what to do to solve it. I downloaded the tar file with the ROOT version that I want to install, extracted the files from the tar file, and ran the configure program successfully, but when I run the "make" command I get this result: cp /root/root/core/utils/src/RClStl.cxx core/utils/src/RClStl_tmp.cxx bin/rmkdepend -R -fcore/utils/src/RClStl_tmp.d -Y -w 1000 -- -pipe -m32 -Wall -W -Woverloaded-virtual -fPIC -Iinclude -DR__HAVE_CONFIG -pthread -UR__HAVE_CONFIG -DROOTBUILD -I/root/root/core/utils/src -D__cplusplus -- core/utils/src/RClStl_tmp.cxx g++ -O2 -pipe -m32 -Wall -W -Woverloaded-virtual -fPIC -Iinclude -DR__HAVE_CONFIG -pthread -UR__HAVE_CONFIG -DROOTBUILD -I/root/root/core/utils/src -o core/utils/src/RClStl_tmp.o -c core/utils/src/RClStl_tmp.cxx In file included from core/utils/src/RClStl.h:28:0, from core/utils/src/RClStl_tmp.cxx:16: core/utils/src/Scanner.h:16:27: fatal error: clang/AST/AST.h: No existe el fichero o el directorio (No such file or directory) compilation terminated. make: * [core/utils/src/RClStl_tmp.o] Error 1 rm core/utils/src/RClStl_tmp.cxx I don't know what to do. Please help me. Thank you in advance.

    Read the article

  • Visit our Consolidated List of Mandatory Project Costing Code and Data Fixes

    - by SherryG-Oracle
    Projects Support has published a document with a consolidated listing of mandatory code and data fixes for Project Costing. Generic Data Fix (GDF) patches are created by development to fix data issues caused by bugs/issues in the application code. The GDF patches are released for download via My Oracle Support, where they are referenced in My Oracle Support documents and used by Support to provide data fixes for known code-fix issues. Consolidated root-cause code fix and generic data fix patches will be superseded whenever any new version is created. These patches fix a number of critical code and data issues identified in the Project Costing flow. This document contains a consolidated list of code and data fixes for Project Costing. The note lists the following details: Note ID, Component, Type (code or data), Abstract, Patch. Visit DocID 1538822.1 today!

    Read the article

  • Chrome 10 makes it possible to run web applications in the background; Google publishes an example

    Chrome 10 makes it possible to run web applications in the background, even when the browser is closed; Google publishes an example. Update of 24/02/11 by Gordon Fowler. Google has just unveiled a new feature available in version 10 (in beta) of its Chrome browser. The feature, dubbed "Background Pages", although it was not highlighted at the release of Chrome 10, is indeed there. It allows web pages to run in the background in a way that is completely transparent to the user. Some applications (described as "background applications") can thus keep runn...

    Read the article

  • remove ssl from Google search results

    - by user73457
    I am the webadmin of a WordPress site that serves up http pages statically. The problem is that some of the pages are shown as https in Google search results. For instance, if the search term "Example Press Kit" is entered, the search result site link comes up as: https://example.com/presskit/ We don't have a site SSL certificate, so surfers are being bounced. I have tried everything. Most recently I created a new website in Google WebAdmin for the https version of our home page. Then I added sitelinks that should have redirected site links intended for https://example.com/* to http://example.com/*. But it doesn't work! Google still shows a dead link to http://example.com/presskit. I didn't think dead links lasted very long in Google results, but there they are, two weeks later. Any ideas?

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.
    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.
    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:
    -------------------------------------------------------------------------------
    | Id   | Operation              | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |    0 | SELECT STATEMENT       |                        |  108K |  939 |
    |    1 | SORT ORDER BY          |                        |  108K |  939 |
    |    2 | NESTED LOOPS OUTER     |                        |  108K |  938 |
    |*   3 | HASH JOIN RIGHT OUTER  |                        |  103K |  762 |
    |    4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
    |*  20 | HASH JOIN RIGHT OUTER  |                        | 73472 |  759 |
    |   21 | VIEW                   | ALL_EXTERNAL_TABLES    |  2097 |    3 |
    |*  34 | HASH JOIN RIGHT OUTER  |                        | 39920 |  755 |
    |   35 | VIEW                   | ALL_MVIEWS             |    51 |    7 |
    |   58 | NESTED LOOPS OUTER     |                        | 39104 |  748 |
    |   59 | VIEW                   | ALL_TABLES             |  6704 |  668 |
    |   89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
    |  106 | VIEW                   | ALL_PART_TABLES        |   277 |   11 |
    -------------------------------------------------------------------------------
    And the same query on 9i:
    -------------------------------------------------------------------------------
    | Id   | Operation              | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |    0 | SELECT STATEMENT       |                        |   16P |  55G |
    |    1 | SORT ORDER BY          |                        |   16P |  55G |
    |    2 | NESTED LOOPS OUTER     |                        |   16P | 862M |
    |    3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
    |    4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
    |    5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
    |*   6 | HASH JOIN OUTER        |                        |  398K |  302 |
    |    7 | VIEW                   | ALL_TABLES             |  342K |  276 |
    |   29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
    |*  50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
    |*  66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
    |*  80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
    |*  96 | VIEW                   | ALL_PART_TABLES        |  852K |      |
    -------------------------------------------------------------------------------
    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:
    -------------------------------------------------------------------------------
    | Id   | Operation              | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------------
    |    0 | SELECT STATEMENT       |                        | 7587K | 3704 |
    |    1 | SORT ORDER BY          |                        | 7587K | 3704 |
    |*   2 | HASH JOIN OUTER        |                        | 7587K |  822 |
    |*   3 | HASH JOIN OUTER        |                        | 5262K |  616 |
    |*   4 | HASH JOIN OUTER        |                        | 2980K |  465 |
    |*   5 | HASH JOIN OUTER        |                        |  710K |  432 |
    |*   6 | HASH JOIN OUTER        |                        |  398K |  302 |
    |    7 | VIEW                   | ALL_TABLES             |  342K |  276 |
    |   29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
    |   50 | VIEW                   | ALL_PART_TABLES        |  852K |  104 |
    |   78 | VIEW                   | ALL_TAB_COMMENTS       |  2043 |   14 |
    |   93 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
    |  106 | VIEW                   | ALL_EXTERNAL_TABLES    | 1777K |   28 |
    -------------------------------------------------------------------------------
That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.
To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.
All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables in that means an extra 1.4MB memory being used during population). Next: fun with the 9i dictionary views.
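As a rough illustration of that client-side approach, here is a minimal sketch of a left outer hash join keyed on (owner, name). The class and field names are invented for illustration and the row types are simplified; this is not the product's actual code.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a client-side left outer hash join over dictionary rows.
    public final class ClientHashJoin {

        public static final class TableRow {
            final String owner;
            final String name;
            String partitionInfo;   // filled in from the subsidiary view; stays null if unmatched

            TableRow(String owner, String name) {
                this.owner = owner;
                this.name = name;
            }
        }

        // Every dictionary view is joined on the same (owner, name) pair,
        // so the base rows only ever need to be hashed once.
        private static String key(String owner, String name) {
            return owner + '\u0000' + name;   // NUL cannot appear in Oracle identifiers
        }

        // Build side: hash the base rows (e.g. from ALL_TABLES) exactly once.
        public static Map<String, TableRow> buildHashTable(List<TableRow> baseRows) {
            Map<String, TableRow> byKey = new HashMap<>(baseRows.size() * 2);
            for (TableRow row : baseRows) {
                byKey.put(key(row.owner, row.name), row);
            }
            return byKey;
        }

        // Probe side: one pass per subsidiary view (ALL_PART_TABLES, ALL_MVIEWS, ...),
        // re-using the same hash table instead of re-hashing the base table per join.
        public static void leftJoinPartitionInfo(Map<String, TableRow> byKey,
                                                 List<String[]> partRows /* {owner, name, info} */) {
            for (String[] part : partRows) {
                TableRow base = byKey.get(key(part[0], part[1]));
                if (base != null) {
                    base.partitionInfo = part[2];   // left outer join: unmatched base rows keep null
                }
            }
        }
    }

Because the same key function and the same map are reused for every subsidiary view, the expensive hashing happens once per base row rather than once per join, which is the saving described above.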

    Read the article

  • I Don't Understand Anything About Randomly Generated Worlds [closed]

    - by Alex Larsen
    What tools do I need to make a Minecraft-like generated world? I heard about Perlin noise and Simplex, but I don't understand anything about them. So far all I found on the internet was a Simplex version for C#, and all it has is functions, and this is what I get: Console.WriteLine(Noise.Generate(SomeNumber, SomeNumber, SomeNumber)); It outputs random floats. I'm really lost. I don't understand the whole randomly generated worlds concept. Can someone help me? And even if I use the noise functions, I don't understand how to use them.
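    For what it's worth, the core idea is small: a noise function maps coordinates to a smooth pseudo-random value, and you treat that value as a terrain height. Below is a minimal sketch in Java using simple value noise rather than real Perlin/Simplex noise; the class name, seed, and constants are made up for illustration, but the world-building idea is the same whichever noise function you plug in.

        import java.util.Random;

        // Rough sketch: turn 2D noise into a Minecraft-style heightmap.
        // Uses simple value noise (random grid values, smoothly interpolated),
        // not real Perlin/Simplex, but it is used the same way.
        public final class Heightmap {

            private static final long SEED = 12345L;

            // Deterministic pseudo-random value in [0, 1) for an integer grid point.
            private static double gridValue(int x, int z) {
                return new Random(SEED ^ (x * 341873128712L) ^ (z * 132897987541L)).nextDouble();
            }

            private static double smooth(double t) {      // fade curve so cells blend smoothly
                return t * t * (3 - 2 * t);
            }

            // Smooth noise at any (x, z), interpolating the four surrounding grid values.
            public static double noise(double x, double z) {
                int x0 = (int) Math.floor(x), z0 = (int) Math.floor(z);
                double tx = smooth(x - x0), tz = smooth(z - z0);
                double a = gridValue(x0, z0),     b = gridValue(x0 + 1, z0);
                double c = gridValue(x0, z0 + 1), d = gridValue(x0 + 1, z0 + 1);
                double top = a + (b - a) * tx;
                double bottom = c + (d - c) * tx;
                return top + (bottom - top) * tz;
            }

            public static void main(String[] args) {
                int worldHeight = 64;
                double scale = 0.25;                      // smaller = gentler, wider hills
                for (int z = 0; z < 16; z++) {
                    StringBuilder row = new StringBuilder();
                    for (int x = 0; x < 16; x++) {
                        // Same (x, z) and seed always give the same height, so the world is reproducible.
                        int height = (int) (noise(x * scale, z * scale) * worldHeight);
                        row.append(height < worldHeight / 2 ? '.' : '#');   // crude top-down view
                    }
                    System.out.println(row);
                }
            }
        }

    The two properties that matter are that the function is smooth (nearby coordinates give similar heights) and deterministic for a given seed, so the same world can be regenerated chunk by chunk instead of being stored.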

    Read the article

  • Reduce boot time between grub menu and login screen

    - by Sudheer
    I use Ubuntu 14.04 LTS, which used to boot fast at the beginning but now loads very slowly. I searched for this but can't find suitable answers, so I want to reduce my boot time, which is now around 1min 12sec overall (boot chart), but I noticed it is taking a long time after the GRUB menu and before the login screen. A blank screen appears after GRUB, waiting... then the login screen appears. I want to know a way to reduce that blank-screen time (or, if possible, remove it) and get to the login screen as fast as possible. I have already removed several of my startup applications. Getting to the desktop after log-in is fast. I don't want to remove Unity and install light desktop environments like Xfce and LXDE. Here is my boot-chart image. Thanks in advance

    Read the article

  • Cannot get Atheros AR9285 to work on 12.10

    - by user100449
    I've already gone through all the advice I could find and still cannot start my Atheros AR9285 wireless card. I have a Toshiba Portege Z830 laptop on which the Wi-Fi already worked under Windows 7, but after migrating to Ubuntu 12.10 I'm not able to get it to work. This is what I see from the lshw command: *-network UNCLAIMED description: Network controller product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:c0500000-c050ffff This is what I see from rfkill list: 0: Toshiba Bluetooth: Bluetooth Soft blocked: yes Hard blocked: no 1: hci0: Bluetooth Soft blocked: yes Hard blocked: no Any idea?

    Read the article

  • How mature is FreeBASIC?

    - by David
    A friend of mine is considering using FreeBASIC in a critical production environment. They currently use GWBasic, and they want to make a soft transition towards more modern languages. I am just worried that there might be undetected bugs in the software. I see that their version number is 0.22.0, which indicates that it is not quite mature yet. I also read this discussion without being able to reach a conclusion. Also, on their SourceForge pages there is no indication of whether it is Alpha or Beta (which anyway is not a very good indicator). Does anyone have their own experience with its maturity, ideas on how to judge the maturity, or know of companies using FreeBASIC in a critical production environment?

    Read the article

  • Clover Trail processors will be compatible with Linux and Android, an Intel spokesperson confirms in an email

    Clover Trail processors will ultimately be compatible with Linux and Android; an Intel spokesperson confirms it in an email. Clover Trail is the processor line that represents the new generation of Intel Atom. The company had announced at the recent Intel Developer Conference that these processors would be exclusive to Windows 8. But in an email sent by Intel spokesperson Kathryn Gill, the company says it has changed strategy, with "plans for another version of this platform [Clover Trail] for Linux and Android; however, we are not commenting on...

    Read the article

  • Programmer logbook application?

    - by jsoldi
    I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. If some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed it up, assuming the problem wasn't already there. The ideal would be to have a super fast computer with a huge hard drive and an application that automatically saves a backup of the whole project every time I change a line in the code, with some file comparison tool that would show me every difference between any two backed-up projects, but that's not really possible for now. So, do you know of any application that makes it easy for a programmer to keep track of the changes made to the source code?

    Read the article

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    This question already has an answer here: https:// search results appearing on Google for purely http:// site (2 answers). For historical reasons, we have things set up so that "www.mydomain.com" redirects to "store.mydomain.com". This has worked perfectly fine until recently, when Google appears to be sending visitors to "https:// www.mydomain.com", which doesn't have an SSL certificate (and never has). Strangely, it's only the first link that goes to "https:// www.mydomain.com"; all other links point correctly to "http:// store.mydomain.com". Because there is no certificate on the "www" version, users are getting an error message. How do I make Google revert to pointing the main link at "http:// store.mydomain.com" (or even "http:// www.mydomain.com")? If I remove "https:// www.mydomain.com" from Google Webmaster Tools, will this also remove the redirected page ("http:// store.mydomain.com")? Thanks.

    Read the article
