Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.


  • Create set of random JPGs

    - by Kylar
    Here's the scenario: I want to create a set of random, small JPEGs - anywhere between 50 bytes and 8 KB in size - and the actual visual content of each JPEG is irrelevant as long as it's valid. I need to generate a thousand or so, and they all have to be unique - even if they only differ by a single pixel. Can I just write a JPEG header/footer and some random bytes in between? I'm not able to use existing photos or sets of photos from the web. The second issue is that the set of images has to be different for each run of the program. I'd prefer to do this in Python, as the wrapping scripts are in Python. I've looked for Python code to generate JPEGs from scratch and didn't find anything, so pointers to libraries are just as good.
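
    A possible approach (a sketch only - it assumes Pillow is installed, and the function and directory names are made up for illustration) is to render tiny images of random noise and save them as JPEGs; random pixel data plus a counter in the filename keeps every file unique and different on each run. Note that a structurally valid JPEG usually needs at least a few hundred bytes of headers and tables, so the 50-byte lower bound is unlikely to be reachable.

        # Sketch: generate unique random JPEGs with Pillow (pip install Pillow).
        import os
        import random
        from PIL import Image

        def make_random_jpegs(out_dir, count=1000, size=(16, 16)):
            os.makedirs(out_dir, exist_ok=True)
            for i in range(count):
                # Random RGB noise makes the images differ; the counter in the
                # filename keeps the files distinct even if two images matched.
                pixels = bytes(random.getrandbits(8) for _ in range(size[0] * size[1] * 3))
                img = Image.frombytes("RGB", size, pixels)
                img.save(os.path.join(out_dir, "rand_%04d.jpg" % i),
                         "JPEG", quality=random.randint(30, 95))

        make_random_jpegs("random_jpegs")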

    Read the article

  • Are there any open source admin panels based on PHP and jQuery?

    - by Pablo
    I've tried many CMS flavors like MODx, Drupal and Joomla for the admin panel, but they can't seem to manage my existing data, like the users. I'm planning on building an admin control panel for my site and I was wondering if there is something out there which I can start working with instead of starting from scratch. UPDATE: It needs to do simple tasks like managing my users and managing my content (the site is an image sharing site, so in this case images). I was looking more for a graphical interface than a full system, as I have lots of custom content and data, like the users.

    Read the article

  • Jython project in Eclipse can't find the xml module, but works in an identical project

    - by Rob Lourens
    I have two projects in Eclipse with Java and Python code, using Jython. I'm also using PyDev. One project can import and use the xml module just fine, while the other gives the error ImportError: No module named xml. As far as I can tell, all the project properties are set identically. The working project was created from scratch and the other comes from code checked out of an svn repository and put into a new project. What could be the difference? Edit: the same happens for the os module, by the way. It's just missing some path somewhere...
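
    One way to narrow this down (a diagnostic sketch, written in Jython/Python 2 syntax; nothing here is project-specific) is to print the interpreter details and module search path from inside each project and diff the two outputs - if the checked-out project's path is missing the Lib directory that provides xml, the import will fail exactly like this.

        # Diagnostic sketch: run inside each project and compare the output.
        import sys

        print "executable:", getattr(sys, "executable", "n/a")
        print "prefix:    ", sys.prefix
        for entry in sys.path:
            print "path entry:", entry

        try:
            import xml
            print "xml found at:", getattr(xml, "__file__", xml)
        except ImportError:
            print "xml import failed; compare the path entries above"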

    Read the article

  • JavaScript, jQuery - how to assign actions to a just-created object?

    - by sebap123
    I am trying to make a simple list with the ability to add and delete elements. For now I am working on adding elements and performing a simple action on each list element (existing and added). Unfortunately I have run into some difficulties with that: I am able to modify elements that were created at the beginning, but not ones added while the page is running. My first idea was to add AJAX to this, but I don't think it is the easiest way. I think that some time ago (I don't remember where) I read how to make this work, but now I can't find it. I would be really glad if someone would help me with this, or at least give a link to a good explanation. Here is what I have done so far (it is mostly just a rough sketch, but the main idea is in it): http://jsfiddle.net/sebap123/pAZ7H/ Thank you for all responses.

    Read the article

  • How to Best Set Up a Website Project in VS.NET

    - by Jason
    I have very little experience with setting up a website from scratch in a .NET environment. As I am doing this now, I am wondering: what's the best way to go? Is it better to create a new Website Project and include the various backend services and database code as part of that project, or is it better to split out the various aspects of the project? If the latter, how would I go about doing that? I want to ensure that this project is easy to manage in the future (in terms of source control, deployment, etc.), so I want to make sure I'm starting off on the right foot. I was unable to find any tutorials online, but if you have any, I would appreciate those as well. Thanks!

    Read the article

  • How to extract data from Google Analytics and build a data warehouse (webhouse) from it?

    - by nkaur301
    I have clickstream data such as referring URL, top landing pages, top exit pages, and metrics such as page views, number of visits and bounces, all in Google Analytics. I am required to build a data warehouse from scratch (which I believe is known as a web-house) from this data. My questions are: 1) Is it possible? The data grows every day (some of it in metrics or measures such as visits, and some in new referring sites), so how would the process of loading the warehouse work? 2) What ETL tool would help me achieve this? I believe Pentaho has a way to pull data out of Google Analytics - has anyone used it? How does that process go? Any references or links would be appreciated besides answers.
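
    Setting the Google Analytics extraction aside, the daily incremental load itself is simple to prototype. Below is a minimal sketch (the table and column names are invented, and the standard-library sqlite3 module stands in for the real warehouse database) of an idempotent daily load into a single page-traffic fact table: each day's partition is wiped and reloaded, so a failed extract can be re-run safely.

        # Minimal sketch of an incremental daily load into a fact table.
        # Table/column names are hypothetical; sqlite3 stands in for the warehouse.
        import sqlite3

        def load_day(conn, day, rows):
            """rows: iterable of (referrer_url, landing_page, pageviews, visits, bounces)."""
            cur = conn.cursor()
            cur.execute("""CREATE TABLE IF NOT EXISTS fact_traffic (
                               day TEXT, referrer_url TEXT, landing_page TEXT,
                               pageviews INTEGER, visits INTEGER, bounces INTEGER)""")
            # Idempotent load: wipe and reload only this day's partition,
            # so a re-run never double-counts.
            cur.execute("DELETE FROM fact_traffic WHERE day = ?", (day,))
            cur.executemany("INSERT INTO fact_traffic VALUES (?, ?, ?, ?, ?, ?)",
                            [(day,) + tuple(r) for r in rows])
            conn.commit()

        conn = sqlite3.connect("webhouse.db")
        load_day(conn, "2010-06-01", [("google.com", "/home", 120, 45, 10)])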

    Read the article

  • Is Drupal CMS suitable for this?

    - by Folle
    My aim is to create a website where I can add some text content along with pictures. What I need, then, is an easy way to add those to the site. Say I have designed (in Photoshop) a certain theme for this purpose; I am now wondering whether I could use Drupal for this, or should I create a website from scratch? Which route is the easiest - applying the theme to Drupal? I've just never used a CMS before, that's why I ask.

    Read the article

  • Can't handle it (jQuery handler understanding needed)

    - by chainwork
    I'm embarrassed to even ask, BUT could someone help me understand what a "handler" is? I am new to jQuery and the API constantly has references similar to the following: toggle( handler(eventObject), handler(eventObject), [ handler(eventObject) ] ) I scratch my head and say to myself "what the hell is a handler?" Then I check my two jQuery books and don't really see anything specific there. I get what an event handler does - it handles an event. But the word handler in the above context confuses me, including "eventObject". I tried to Google it but could not find a really clear definition of what exactly a handler is as it relates to jQuery. Thanks for your help =]
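
    The idea behind the word is language-agnostic: a handler is simply a function you hand to the library, and the library calls it back later, passing it an event object that describes what happened. A rough Python sketch of the same pattern (the names are illustrative only, not jQuery's API):

        # Illustrative sketch of the handler/callback idea (not jQuery itself).
        class Event(object):
            def __init__(self, kind, target):
                self.kind = kind      # loosely analogous to eventObject.type
                self.target = target  # loosely analogous to eventObject.target

        def bind(handler):
            """The library stores your handler and calls it later; you never call it yourself."""
            def fire(target):
                handler(Event("click", target))  # the library builds the event object for you
            return fire

        def my_handler(event):                   # this function is "the handler"
            print("got a", event.kind, "on", event.target)

        fire_click = bind(my_handler)            # you pass the function in...
        fire_click("#menu")                      # ...and it is called back with an Event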

    Read the article

  • A single request appears to have come from all the browsers? Should I be worried?

    - by HorusKol
    I was looking over my site access logs when I noticed a request with the following user agent string: "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)" The request appears to have originated from 91.121.153.210 - which appears to be owned by these guys: http://www.medialta.eu/accueil.html I find this rather impressive - a request from 'all' user-agents. There are actually quite a few of these requests over at least the last few days - so it naturally piqued my interest. Searching Google simply seems to produce a very long list of websites which make their Apache access logs publicly available... Is this some weird indication that we're being targeted? And by whom?
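
    One thing worth checking (a quick sketch; the filename is made up - paste the logged header into it) is that the header is really a list of separate user-agent strings joined by escaped quotes, which looks less like a browser and more like a scraper that sent its whole user-agent rotation list in a single request:

        # Split the logged header on the escaped '","' separator to list the agents.
        raw = open("suspect_ua.txt").read()
        agents = [a.strip('\\"') for a in raw.split('\\",\\"')]
        print("%d user-agent strings in one header" % len(agents))
        for a in agents:
            print(" - " + a[:60])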

    Read the article

  • apt-get update cannot find Ubuntu servers

    - by Phrogz
    Running sudo apt-get update fails on my server (that has a 'net connection). Are the servers temporarily broken, or is my apt misconfigured and using old servers? In short, how do I fix this? Here's the output: ~$ uname -a Linux nematode 2.6.28-19-server #66-Ubuntu SMP Sat Oct 16 18:41:24 UTC 2010 i686 GNU/Linux ~$ sudo apt-get update Err http://us.archive.ubuntu.com jaunty Release.gpg Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/main Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/restricted Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/universe Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/multiverse Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates Release.gpg Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/main Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/restricted Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/universe Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/multiverse Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://security.ubuntu.com jaunty-security Release.gpg Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/main Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/restricted Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/universe Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/multiverse Translation-en_US Could not resolve 'security.ubuntu.com' Reading package lists... 
Done W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/Release.gpg Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/main/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/restricted/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/universe/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/Release.gpg Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/main/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/restricted/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/universe/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/Release.gpg Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/main/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/restricted/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/universe/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Some index files failed to download, they have been ignored, or old ones used instead. W: You may want to run apt-get update to correct these problems
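
    Every failure above is "Could not resolve ...", so the problem is name resolution on the server rather than the Ubuntu archives themselves. A quick standard-library check from the same machine (a sketch; if these lookups also fail, the fix is the resolver configuration, e.g. /etc/resolv.conf, not apt):

        # DNS sanity check: run on the affected server.
        import socket

        for host in ("us.archive.ubuntu.com", "security.ubuntu.com", "example.com"):
            try:
                print("%s -> %s" % (host, socket.gethostbyname(host)))
            except socket.gaierror as e:
                print("%s -> lookup failed: %s" % (host, e))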

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions both via this blog and via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after. What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections. What is different? It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software to configure the device. I think this was a very sensible decision as these areas had substantial room for improvement. Single button operation to switch on, enable WiFi and connect to 3G Improved OLED display (see below), replacing the multi-coloured LEDs; it shows signal strength, SMS notifications, the number of connected clients and data usage Management is via a web based dashboard accessible from any web browser. This is a big win for those running Linux or Mac OS X, for iPad users and, for me, as I can now configure the device from Windows 7 64-bit Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit Automatic reconnection when regaining a signal Improved charging time, which should allow recharging of the device when in use Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi What is the same? Virtually the same size and weight The battery is the same unit as the original MiFi so you'll have a handy spare if you upgrade Data plans remain the same as the current MiFi, so the cheapest price for upgraders will be £49 pay as you go Still only works on 3G networks, with no fallback to GPRS or EDGE There is no specific upgrade path for existing Three customers, either from dongle or from the original MiFi My opinion I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz-bang technology features which aren't of interest to mainstream users. The one button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable, that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway. The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web based dashboard will mean I no longer need to resort to my XP based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager, which cannot locate the MiFi when connected. 
Links to other sites, and other images of the device Good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ Also, a round up of other sneak preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/ Pictures Here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now being a prominent feature on the front of the device. One of my fellow bloggers had a Linux based netbook, showing off the web based dashboard complete with Text messages panel to manage SMS. And finally, I never thought that my blog sub title would ever end up printed onto a cup cake, ... and here's some of the other cup cakes ...

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is whether anybody knows of a hardware USB dongle for software protection which offers very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not a different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses. From the application's point of view, one should continue to write (API calls) once and run everywhere, without having to care where the end-user will run the software. The provided jar should itself deal with loading the appropriate native library. Does such a thing exist? With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc. (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) their Linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles (they should've been "soooo" straightforward to integrate -- according to the marketing material :\ ) and they were the worst ever: SecureDongle X has no 64-bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security USB dongle for cross-platform deployments? Note: the software is low-volume, high-value; the application is off-line (intranet with no internet access), so no online-activation alternatives and the like. -- EDIT Tried out HASP dongles (used to be called "Aladdin"), and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. the end Linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem, and export an env. variable accordingly. -- EDIT 2 I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed their family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore, I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent from selling copies of the software without sharing the revenue. -- EDIT 3 I'd like to emphasize that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of software, but a very vertical one, and (for reasons not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.

    Read the article

  • VMWare player - compiling server modules - Ubuntu 13.10

    - by user211976
    While running Ubuntu 13.04 whenever the Linux kernel had been updated, this used to make vmware player happy: sudo apt-get install linux-headers-$(uname -r) sudo vmware-modconfig --console --install-all Yesterday I upgraded to Ubuntu 13.10 and lo and behold, the above workaround does not work anymore: Unable to install all modules. See log for details. I assume by "See log" it means the files in /tmp/vmware-root/*log root@hugin:/tmp/vmware-root# ls -ltr /tmp/vmware-root/ totalt 16 -rw-r--r-- 1 root root 3815 nov 6 13:54 vmware-apploader-17267.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17693.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17742.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18701.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18750.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19100.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19149.log -rw-r--r-- 1 root root 9250 nov 6 13:54 vmware-modconfig-17267.log root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-modconfig-17267.log 2013-11-06T13:54:28.950+01:00| modconfig| I120: Copied Module.symvers from "/tmp/modconfig-wpDrtf/vmci-only/Module.symvers" to "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers". 2013-11-06T13:54:28.950+01:00| modconfig| I120: Building module with command "/usr/bin/make -j8 -C /tmp/modconfig-wpDrtf/vsock-only auto-build HEADER_DIR=/lib/modules/3.11.0-12-generic/build/include CC=/usr/bin/gcc IS_GCC_3=no" 2013-11-06T13:54:31.048+01:00| modconfig| I120: Successfully built vsock. Module is currently at "/tmp/modconfig-wpDrtf/vsock.o". 2013-11-06T13:54:31.048+01:00| modconfig| I120: Found the vsock symvers file at "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers". 2013-11-06T13:54:31.048+01:00| modconfig| I120: Installing vsock from /tmp/modconfig-wpDrtf/vsock.o to /lib/modules/3.11.0-12-generic/misc/vsock.ko. 2013-11-06T13:54:31.048+01:00| modconfig| I120: Registering file "/lib/modules/3.11.0-12-generic/misc/vsock.ko". 2013-11-06T13:54:31.400+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0. 2013-11-06T13:54:31.400+01:00| modconfig| I120: Registering file "/usr/lib/vmware/symvers/vsock-3.11.0-12-generic". 2013-11-06T13:54:31.764+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0vmware-installer" exited with status 0. 2013-11-06T13:54:31.786+01:00| modconfig| I120: We are now shutdown. Ready to die! root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-apploader-17267.log 2013-11-06T13:54:20.911+01:00| appLoader| I120: libglib-2.0.so.0 <SYSTEM> 2013-11-06T13:54:20.911+01:00| appLoader| I120: libz.so.1 <SYSTEM> 2013-11-06T13:54:20.911+01:00| appLoader| I120: libvmware-modconfig-console.so <SHIPPED> 2013-11-06T13:54:20.912+01:00| appLoader| I120: Shipped glib version is 2.24 2013-11-06T13:54:20.912+01:00| appLoader| I120: System glib version is 2.38 2013-11-06T13:54:20.912+01:00| appLoader| I120: Using system version of glib. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libgcc_s.so.1. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libglib-2.0.so.0. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libz.so.1. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading shipped version of libxml2.so.2.

    Read the article

  • Intel Dual Band Wireless-AC 7260 keeps dropping wifi

    - by Rick T
    My Intel Dual Band Wireless-AC 7260 wifi connection keeps dropping, and the network to which I was connected disappears from the list of available networks in Network Manager. The only way to fix it is to disable wifi and re-enable it. How can I fix this? I'm using Ubuntu 14.04 64-bit. It mostly drops connections on the 5 GHz network. My other devices don't drop connections over wifi. See logs and versions: rt@simon:~$ uname -a Linux simon 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux rt@simon:~$ rt@simon:~$ dmesg | grep iwl [ 3.370777] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X [ 3.381089] iwlwifi 0000:03:00.0: loaded firmware version 22.24.8.0 op_mode iwlmvm [ 3.414637] iwlwifi 0000:03:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144 [ 3.414695] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 3.414913] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S rt@simon:~$ cat /var/log/syslog | grep -e iwl -e 80211 | tail -n25 Aug 14 08:13:02 simon kernel: [ 3.452780] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:13:02 simon kernel: [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' Aug 14 08:13:06 simon NetworkManager[1125]: <info> rfkill1: found WiFi radio killswitch (at /sys/devices/pci0000:00/0000:00:1c.2/0000:03:00.0/ieee80211/phy0/rfkill1) (driver iwlwifi) Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): using nl80211 for WiFi device control Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): new 802.11 WiFi device (driver: 'iwlwifi' ifindex: 3) Aug 14 08:13:06 simon kernel: [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:13:06 simon kernel: [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:14:18 simon kernel: [ 81.230162] cfg80211: Calling CRDA to update world regulatory domain Aug 14 08:14:18 simon kernel: [ 81.232330] cfg80211: World regulatory domain updated: Aug 14 08:14:18 simon kernel: [ 81.232332] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) Aug 14 08:14:18 simon kernel: [ 81.232333] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232334] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232335] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232336] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232337] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:02 simon kernel: [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:23:02 simon kernel: [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:23:18 simon kernel: [ 621.223905] cfg80211: Calling CRDA to update world regulatory domain Aug 14 08:23:18 simon kernel: [ 621.228945] cfg80211: World regulatory domain updated: Aug 14 08:23:18 simon kernel: [ 621.228950] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) Aug 14 08:23:18 simon kernel: [ 621.228954] cfg80211: (2402000 KHz - 2472000 
KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228956] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228959] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228961] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228963] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Read the article

  • Scid Chess Program Compiling issue

    - by lbochitt
    First of all I would like to point that I'm new to Ubuntu so sorry if what I am asking is ridiculous. I have downloaded Scid 4.4 chess program and I have tried to compile it as it was explained on its website: 1) Initialize git. 2) Create a folder where you want to download and (?) compile the source, then cast: git init on your command line. 3) You're now ready to download the sources recall Fulvio's spell: git clone git://scid.git.sourceforge.net/gitroot/scid/scid This should get you the latest Scid source. 4) You're now ready to compile Scid. In principle, all you need to do is: ./configure and then make 5) If you get stuck, you probably need to get developer versions of tcl/tk. This translates into issuing these three commands: sudo apt-get install tcl8.5-dev sudo apt-get install tk8.5-dev sudo apt-get install zlib1g -dev 6) You should now be ready to compile The problem is that when I run ./configure to start compiling the following message appears on Terminal: configure: Makefile configuration program for Scid Tcl/Tk version: 8.5 Your operating system is: Linux 3.8.0-19-generic Location of "tcl.h": /usr/include/tcl8.5 Location of "tk.h": /usr/include/tcl8.5 Location of Tcl 8.5 library: not found Location of Tk 8.5 library: not found Checking if your system already has zlib installed: yes. Using Makefile.conf. Not all settings could be determined! The default Makefile was written. You will need to edit it before you can compile Scid. What should I do? Has anybody faced this problem before? Thanks in advance I have run ls -l /usr/include/tcl8.5/tcl.h here's the result: -rw-r--r-- 1 root root 87291 abr 22 10:45 /usr/include/tcl8.5/tcl.h I have also tried what you suggested Could you run git reset --hard HEAD and git clean -d -f to clean up everything using Git? Then run ./configure again. Just a shot in the dark - I've seen some GNU automake stuff still listening to its "cached" version of the results or something. Still no solution. I don't know why it can't recognize the library though it is installed I opened configure to see where the program looked for the library. This is the code: # libraryPath: List of possible locations of Tcl/Tk library. set libraryPath { /usr/lib /usr/lib64 /usr/local/tcl/lib /usr/local/lib /usr/X11R6/lib /opt/tcltk/lib /usr/lib/x86_64-linux-gnu } lappend libraryPath "/usr/lib/tcl${tclv}" lappend libraryPath "/usr/lib/tk${tclv}" lappend libraryPath "/usr/lib/tcl${tclv_nodot}" lappend libraryPath "/usr/lib/tk${tclv_nodot}" # Try to add tcl_library and auto_path values to libraryPath, # in case the user has a non-standard Tcl/Tk library location: if {[info exists ::tcl_library]} { lappend headerPath \ [file join [file dirname [file dirname $::tcl_library]] include] lappend libraryPath [file dirname $::tcl_library] lappend libraryPath $::tcl_library } if {[info exists ::auto_path]} { foreach name $::auto_path { lappend libraryPath $name } } if {! [info exists var(TCL_INCLUDE)]} { puts -nonewline { Location of "tcl.h": } set opt(tcl_h) [findDir "tcl.h" $headerPath "TCL_VERSION.*$tclv"] if {$opt(tcl_h) == ""} { puts "not found" set success 0 set opt(tcl_h) "$::defaultVar(TCL_INCLUDE)" } else { puts $opt(tcl_h) } } set opt(tcl_lib) "" if {! 
[info exists var(TCL_LIBRARY)]} { puts -nonewline " Location of Tcl $tclv library: " set opt(tcl_lib) [findDir "libtcl${tclv}.*" $libraryPath] if {$opt(tcl_lib) == ""} { set opt(tcl_lib) [findDir "libtcl${tclv_nodot}.*" $libraryPath] if {$opt(tcl_lib) == ""} { puts "not found" set success 0 set opt(tcl_lib) "$opt(tcl_h)" set opt(tcl_lib_file) "tcl\$(TCL_VERSION)" } else { set opt(tcl_lib_file) "tcl${tclv_nodot}" puts $opt(tcl_lib) } } else { set opt(tcl_lib_file) "tcl\$(TCL_VERSION)" puts $opt(tcl_lib) } } if {! [info exists var(TCL_INCLUDE)]} { set var(TCL_INCLUDE) "-I$opt(tcl_h)" } if {! [info exists var(TCL_LIBRARY)]} { set var(TCL_LIBRARY) "-L$opt(tcl_lib) -l$opt(tcl_lib_file)" } return $success So I guess (And by guess I mean i have no idea how to code) I should write somewhere in here usr/lib/tcl8.5 and usr/lib/tk8.5, am I right?

    Read the article

  • How can I back up my Ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups
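
    One pragmatic route is a nightly rsync snapshot of / with the pseudo-filesystems excluded and hard-linked rotation for cheap incrementals - most of the files Deja Dup complained about are locks, sockets, caches and logs that a root-run rsync can either read or safely skip. A minimal sketch (run as root from cron; the destination path, exclude list and retention count are assumptions to adapt):

        #!/usr/bin/env python
        # Sketch of a nightly rsync snapshot with simple rotation (run as root).
        import datetime
        import os
        import shutil
        import subprocess

        DEST = "/backups"       # ideally a separate disk or partition
        KEEP = 14               # number of nightly snapshots to retain
        EXCLUDES = ["/proc", "/sys", "/dev", "/run", "/tmp", "/mnt", "/media",
                    "/lost+found", DEST]

        if not os.path.isdir(DEST):
            os.makedirs(DEST)
        snapshot = os.path.join(DEST, datetime.date.today().isoformat())
        latest = os.path.join(DEST, "latest")

        cmd = ["rsync", "-aAXH", "--delete"]
        if os.path.isdir(latest):
            cmd.append("--link-dest=" + latest)   # hard-link unchanged files
        cmd += ["--exclude=" + e for e in EXCLUDES] + ["/", snapshot]
        subprocess.check_call(cmd)

        # Point "latest" at the new snapshot and prune the oldest ones.
        if os.path.islink(latest):
            os.remove(latest)
        os.symlink(snapshot, latest)
        snapshots = sorted(d for d in os.listdir(DEST) if d[:2] == "20")
        for old in snapshots[:-KEEP]:
            shutil.rmtree(os.path.join(DEST, old))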

    Read the article

  • Xorg does not see my monitor EDID

    - by sean farley
    Below is the output from my Xorg.0. X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 22.311] X Protocol Version 11, Revision 0 [ 22.311] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 22.311] Current Operating System: Linux sean-P55-USB3 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 22.311] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=0a34603e-aee9-44d1-8982-a5a5a38c3e4d ro quiet splash [ 22.311] Build Date: 29 August 2012 12:12:33AM [ 22.311] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 22.311] Current version of pixman: 0.24.4 [ 22.311] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 22.311] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 22.311] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 17 13:20:45 2012 [ 22.311] (==) Using config file: "/etc/X11/xorg.conf" [ 22.311] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 22.311] (==) No Layout section. Using the first Screen section. [ 22.311] (==) No screen section available. Using defaults. [ 22.311] (**) |-->Screen "Default Screen Section" (0) [ 22.311] (**) | |-->Monitor "<default monitor>" [ 22.311] (==) No device specified for screen "Default Screen Section". Using the first device section listed. [ 22.311] (**) | |-->Device "Default Device" [ 22.311] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 22.311] (==) Automatically adding devices I have searched all over, and followed lots of dead ends and unanswered questions on this issue. I need to get this monitor recognised so I can use the native resolution of 1600x1200. The Nvidia driver in Windows has no problem with this. The monitor is an old Iiyama HM204DT A. Is there a way of configuring Xorg manually to get these working? I have tried xrandr but this will not work. Output:- sean@sean-P55-USB3:~$ xrandr Screen 0: minimum 8 x 8, current 1152 x 864, maximum 16384 x 16384 DVI-I-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0 + 1360x768 60.0 59.8 1152x864 60.0* 800x600 72.2 60.3 56.2 680x384 119.9 119.6 640x480 59.9 512x384 120.0 400x300 144.4 320x240 120.1 DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 disconnected (normal left inverted right x axis y axis) Tried Nvidia Xorg.config: sean@sean-P55-USB3:~$ sudo nvidia-xconfig [sudo] password for sean: Using X configuration file: "/etc/X11/xorg.conf". VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf. Device section "Default Device" must have a Driver line. Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original' Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup' New X configuration file written to '/etc/X11/xorg.conf' How do I insert a driver line? This is a bit of a pain as I want to use my Vectorworks cad program in a WinXP Vbox at 1600x1200 but all virtual drives are restricted to the host screen resolution. Do i need to manually create EIDI info in Xorg? I am slightly confused about how Xorg and Nvidia relate Please help

    Read the article

  • Plymouth and GRUB do not show at all

    - by WarriorIng64
    I am using Ubuntu 11.04 64-bit as my only OS on my desktop computer, which used to only run Ubuntu 10.04 LTS until I had the time to upgrade it with a fresh install. It uses integrated NVIDIA graphics (listed as a GeForce 6150SE nForce 430 by the NVIDIA X Server Settings utility) with the current proprietary driver as provided by the Additional Drivers utility, and has a VGA connection to a 1680x1050 Acer monitor. I used to get the (ugly-looking version of) Plymouth graphical boot screen while under 10.04. It didn't look that great, but I was fine with it. Now, it doesn't show on 11.04 at all during boot (I just get an error message in a moving gray box from the monitor saying "Input Not Supported"), and only rarely it will show on shutdown, all garbled up. I could not get GRUB to show during boot while holding down Shift, either (same error message), but pressing Enter while it should be up starts the system normally. A picture of the error message I was getting: Once fully booted, the system still shows the login screen and desktop just fine. Any information on how to troubleshoot this would be appreciated. If there's any hardware-specific stuff I forgot to include here, let me know the relevant commands to run in a comment below. Things that I've tried: Running plymouth in a framebuffer: no effect Booting with nomodeset as my grub boot: option no effect Booting with nomodeset and plymouth in a framebuffer: no effect other than Plymouth showing during shutdown only Following the Softpedia instructions for fixing Plymouth's resolution: Problem mostly solved, except logo does not show in Plymouth during boot, and both grub and Plymouth are slightly off-center #4 above, but with nomodeset removed as a grub boot option: same effect as #4 #5 above, but with vt.handoff=7 added as a grub boot option: same effect as #4 I have added the current contents of /etc/default/grub as requested in the comments: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' GRUB_GFXMODE=1280x1024 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" CURRENT STATUS: I forgot to uncomment one line as per "things that I've tried" #4, so I took care of that. I can now see GRUB during startup when I hold Shift and a normal-looking Plymouth during shutdown...but Plymouth during boot is now just a solid purple screen. 
In each case, it's displayed a little off-center to the left, with a thin black bar running down the right side of the monitor. The error pictured above no longer shows. I'd say this problem is about 2/3 solved now. UPDATE: After Natty started freezing up on me, I decided to dual-boot with Oneiric, which unfortunately shows the same problems. Rather than trying all these workarounds though, I decided to do what I should have done from the start and file a pair of bug reports. LAST UPDATE: Bug 850908 has been confirmed as a legitimate nouveaufb bug. I have overwritten my 11.04 partition with 12.04 LTS, and I can confirm at this time that the issue is present there, as well. I will now flag this question to be closed, yet I hope it was helpful for anyone who experienced similar issues; if you are still having the same problem as me, please go there and mark yourself as affected. Thanks!

    Read the article

  • Banshee encountered a Fatal Error (sqlite error 11: database disk image is malformed)

    - by Nik
    I am running ubuntu 10.10 Maverick Meerkat, and recently I am helping in testing out indicator-weather using the unstable buids. However there was a bug which caused my system to freeze suddenly (due to indicator-weather not ubuntu) and the only way to recover is to do a hard reset of the system. This happened a couple of times. And when i tried to open banshee after a couple of such resets I get the following fatal error which forces me to quit banshee. The screenshot is not clear enough to read the error, so I am posting it below, An unhandled exception was thrown: Sqlite error 11: database disk image is malformed (SQL: BEGIN TRANSACTION; DELETE FROM CoreSmartPlaylistEntries WHERE SmartPlaylistID IN (SELECT SmartPlaylistID FROM CoreSmartPlaylists WHERE IsTemporary = 1); DELETE FROM CoreSmartPlaylists WHERE IsTemporary = 1; COMMIT TRANSACTION) at Hyena.Data.Sqlite.Connection.CheckError (Int32 errorCode, System.String sql) [0x00000] in <filename unknown>:0 at Hyena.Data.Sqlite.Connection.Execute (System.String sql) [0x00000] in <filename unknown>:0 at Hyena.Data.Sqlite.HyenaSqliteCommand.Execute (Hyena.Data.Sqlite.HyenaSqliteConnection hconnection, Hyena.Data.Sqlite.Connection connection) [0x00000] in <filename unknown>:0 Exception has been thrown by the target of an invocation. at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0 at System.Reflection.MonoCMethod.Invoke (BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0 at System.Reflection.ConstructorInfo.Invoke (System.Object[] parameters) [0x00000] in <filename unknown>:0 at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00000] in <filename unknown>:0 at System.Activator.CreateInstance (System.Type type) [0x00000] in <filename unknown>:0 at Banshee.Gui.GtkBaseClient.Startup () [0x00000] in <filename unknown>:0 at Hyena.Gui.CleanRoomStartup.Startup (Hyena.Gui.StartupInvocationHandler startup) [0x00000] in <filename unknown>:0 .NET Version: 2.0.50727.1433 OS Version: Unix 2.6.35.27 Assembly Version Information: gkeyfile-sharp (1.0.0.0) Banshee.AudioCd (1.9.0.0) Banshee.MiniMode (1.9.0.0) Banshee.CoverArt (1.9.0.0) indicate-sharp (0.4.1.0) notify-sharp (0.4.0.0) Banshee.SoundMenu (1.9.0.0) Banshee.Mpris (1.9.0.0) Migo (1.9.0.0) Banshee.Podcasting (1.9.0.0) Banshee.Dap (1.9.0.0) Banshee.LibraryWatcher (1.9.0.0) Banshee.MultimediaKeys (1.9.0.0) Banshee.Bpm (1.9.0.0) Banshee.YouTube (1.9.0.0) Banshee.WebBrowser (1.9.0.0) Banshee.Wikipedia (1.9.0.0) pango-sharp (2.12.0.0) Banshee.Fixup (1.9.0.0) Banshee.Widgets (1.9.0.0) gio-sharp (2.14.0.0) gudev-sharp (1.0.0.0) Banshee.Gio (1.9.0.0) Banshee.GStreamer (1.9.0.0) System.Configuration (2.0.0.0) NDesk.DBus.GLib (1.0.0.0) gconf-sharp (2.24.0.0) Banshee.Gnome (1.9.0.0) Banshee.NowPlaying (1.9.0.0) Mono.Cairo (2.0.0.0) System.Xml (2.0.0.0) Banshee.Core (1.9.0.0) Hyena.Data.Sqlite (1.9.0.0) System.Core (3.5.0.0) gdk-sharp (2.12.0.0) Mono.Addins (0.4.0.0) atk-sharp (2.12.0.0) Hyena.Gui (1.9.0.0) gtk-sharp (2.12.0.0) Banshee.ThickClient (1.9.0.0) Nereid (1.9.0.0) NDesk.DBus.Proxies (0.0.0.0) Mono.Posix (2.0.0.0) NDesk.DBus (1.0.0.0) glib-sharp (2.12.0.0) Hyena (1.9.0.0) System (2.0.0.0) Banshee.Services (1.9.0.0) Banshee (1.9.0.0) mscorlib (2.0.0.0) Platform Information: Linux 2.6.35-27-generic 
i686 unknown GNU/Linux Disribution Information: [/etc/lsb-release] DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.10 DISTRIB_CODENAME=maverick DISTRIB_DESCRIPTION="Ubuntu 10.10" [/etc/debian_version] squeeze/sid Just to make it clear, this happened only after the hard resets and not before. I used to use banshee everyday and it worked perfectly. Can anyone help me fix this?
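
    Since the error is SQLite's "database disk image is malformed", the damage is in Banshee's library database rather than in Banshee itself, which fits a hard reset happening mid-write. A quick way to confirm (a sketch; ~/.config/banshee-1/banshee.db is the usual location but worth verifying, and close Banshee first):

        # Check the integrity of Banshee's library database.
        import os
        import sqlite3

        db = os.path.expanduser("~/.config/banshee-1/banshee.db")
        result = sqlite3.connect(db).execute("PRAGMA integrity_check;").fetchall()
        print(result)   # [('ok',)] means healthy; anything else confirms corruption

        # If it is corrupt, dump what is recoverable and rebuild, e.g.:
        #   sqlite3 banshee.db ".dump" | sqlite3 banshee-rebuilt.db
        # then back up the original and replace it with the rebuilt copy.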

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen One of the challenges with today's servers is getting the server up and running and understanding what all of the steps are once you plug the server in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If not done correctly, such as if you separately downloading disk firmware or controller firmware that doesn't match the existing OS drivers, you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server only to find out that newer versions of software and firmware are available on the web. Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built-in to every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use. Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM to the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available to you for this specific server. This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drivers) come as a single bundle, downloading as a single unit, that has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to. Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SuSe Linux, and Windows. 
And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. If we can innovate around problems and find solutions to make our servers easier to manage, this reduces IT costs and makes managing servers simpler. I think with Oracle System Assistant we have done just that. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx

    The Building a Data Mart with Pentaho Data Integration video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (I even heard the name of Ralph Kimball in the video). A viewer should be familiar with concepts such as the Star Schema and Slowly Changing Dimension types, so before watching this course I suggest skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph's excellent The Data Warehouse Toolkit. The author also goes beyond Pentaho alone to MySQL and MonetDB, which is a real icing on the cake! Indeed, I would even suggest the course should be named 'Building a Data Warehouse with Pentaho'.

    To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the vi editor, and the Bash command shell, but similar requirements would also apply on the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used several times in this course; it seems no more complex than, say, MySQL, but from what I have read it is very well suited to fast querying of big data volumes thanks to its columnstore (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition should be more than anyone needs to learn it, or even to go into a POC. MonetDB can be downloaded (for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author appears to use Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schemas one can also use a universal tool called SQuirreL, available from http://squirrel-sql.sourceforge.net.

    Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to get used to, especially if English is your mother tongue, but I got over it within a few chapters. I liked the rate at which the material is presented; it made me feel I got my money's worth for every second.

    Eventually, my impressions are: Pentaho is an awesome ETL offering and very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to get to know it better; Data Warehousing with the traditional tools still rocks, despite all the Big Data offerings (Hive, Sqoop, Pig on Hadoop); chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult.

    In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is aimed at a beginner to intermediate level ETL/DW developer. For learning more on Data Warehousing and Pentaho, I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it!

    Disclaimer: I received this video from the publisher for the purpose of a public review.
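    Since the review points out that the course does not cover acquiring the software, here is a rough sketch of how one might stand up a local MonetDB database for the exercises on Ubuntu. The package names assume MonetDB's own apt repository is configured, and the database name is made up for illustration.

        # Install the MonetDB server and SQL client (package names are an assumption)
        sudo apt-get install monetdb5-sql monetdb-client

        # Create and start a database farm, then create a database for the data mart exercises
        monetdbd create ~/dbfarm
        monetdbd start ~/dbfarm
        monetdb create pentaho_dm
        monetdb release pentaho_dm    # take the new database out of maintenance mode

        # Connect with the SQL client; Pentaho Data Integration would reach the same database over JDBC
        mclient -d pentaho_dm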

    Read the article

  • Installing CUDA on Ubuntu 12.04 with nvidia driver 295.59

    - by johnmcd
    I have been trying to get CUDA to run on an NVIDIA GT 650M based laptop. I am running Ubuntu 12.04 with the NVIDIA 295.59 driver. My laptop uses Optimus, so I have installed the driver via Bumblebee. Bumblebee is not working correctly yet -- however, I believe it is possible to install CUDA independently. To install CUDA I have followed the instructions detailed here: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? However, I am still running into problems building the SDK. I made the changes specified at the above link in common.mk, but I got the following (snippet) from the build process:

    make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
    /usr/bin/ld: warning: libnvidia-tls.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
    /usr/bin/ld: warning: libnvidia-glcore.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv000glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv022tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv007tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv009tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv016tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv001glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv006tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv011tls'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv002glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021glcore'
    /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014tls'
    collect2: ld returned 1 exit status
    make[2]: *** [../../bin/linux/release/fluidsGL] Error 1
    make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
    make[1]: *** [src/fluidsGL/Makefile.ph_build] Error 2
    make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
    make: *** [all] Error 2

    The libraries that ld warns about are installed on my system:

    $ locate libnvidia-tls.so.302.17 libnvidia-glcore.so.302.17
    /usr/lib/nvidia-current/libnvidia-glcore.so.302.17
    /usr/lib/nvidia-current/libnvidia-tls.so.302.17
    /usr/lib/nvidia-current/tls/libnvidia-tls.so.302.17
    /usr/lib32/nvidia-current/libnvidia-glcore.so.302.17
    /usr/lib32/nvidia-current/libnvidia-tls.so.302.17
    /usr/lib32/nvidia-current/tls/libnvidia-tls.so.302.17

    However, /usr/lib/nvidia-current and /usr/lib32/nvidia-current are not being picked up by ldconfig. I have tried adding them via a file in /etc/ld.so.conf.d/, which gets past this error; however, now I am getting the following error:

    make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
    cc1plus: warning: command line option ‘-Wimplicit’ is valid for C/ObjC but not for C++ [enabled by default]
    obj/x86_64/release/deviceQueryDrv.cpp.o: In function `main':
    deviceQueryDrv.cpp:(.text.startup+0x5f): undefined reference to `cuInit'
    deviceQueryDrv.cpp:(.text.startup+0x99): undefined reference to `cuDeviceGetCount'
    deviceQueryDrv.cpp:(.text.startup+0x10b): undefined reference to `cuDeviceComputeCapability'
    deviceQueryDrv.cpp:(.text.startup+0x127): undefined reference to `cuDeviceGetName'
    deviceQueryDrv.cpp:(.text.startup+0x16a): undefined reference to `cuDriverGetVersion'
    deviceQueryDrv.cpp:(.text.startup+0x1f0): undefined reference to `cuDeviceTotalMem_v2'
    deviceQueryDrv.cpp:(.text.startup+0x262): undefined reference to `cuDeviceGetAttribute'
    deviceQueryDrv.cpp:(.text.startup+0x457): undefined reference to `cuDeviceGetAttribute'
    deviceQueryDrv.cpp:(.text.startup+0x4bc): undefined reference to `cuDeviceGetAttribute'
    deviceQueryDrv.cpp:(.text.startup+0x502): undefined reference to `cuDeviceGetAttribute'
    deviceQueryDrv.cpp:(.text.startup+0x533): undefined reference to `cuDeviceGetAttribute'
    obj/x86_64/release/deviceQueryDrv.cpp.o:deviceQueryDrv.cpp:(.text.startup+0x55e): more undefined references to `cuDeviceGetAttribute' follow
    collect2: ld returned 1 exit status
    make[2]: *** [../../bin/linux/release/deviceQueryDrv] Error 1
    make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
    make[1]: *** [src/deviceQueryDrv/Makefile.ph_build] Error 2
    make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
    make: *** [all] Error 2

    I would appreciate any help anyone can provide. If I can provide any further information, please let me know. Thanks.
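    For reference, the /etc/ld.so.conf.d/ step described in the question usually looks like the sketch below; the file name is made up, and the LIBRARY_PATH line is only a possible workaround for the second error (undefined references to cuInit and friends typically mean the CUDA driver library is not visible at link time), not a verified fix.

        # Make the driver libraries visible to the dynamic linker (the step described above)
        printf '%s\n' /usr/lib/nvidia-current /usr/lib32/nvidia-current | sudo tee /etc/ld.so.conf.d/nvidia-current.conf
        sudo ldconfig

        # Check that the CUDA driver API library, which exports cuInit and cuDeviceGetAttribute, is found
        ldconfig -p | grep libcuda

        # Let the linker see the same directory at build time before re-running the SDK build
        # (whether this is sufficient depends on how common.mk was edited)
        export LIBRARY_PATH=/usr/lib/nvidia-current:$LIBRARY_PATH
        make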

    Read the article

  • Using the SOA-BPM VirtualBox Appliance

    - by antony.reynolds
    Quickstart Guide to Using the Oracle Appliance for SOA/BPM

    Recently I have been setting up some machines for fellow engineers. My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox. Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access, amongst other features. In order to get them started quickly with some images, I downloaded the pre-built appliance for SOA/BPM from OTN. Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications. Specifically, by using the virtual appliance I got the following pre-installed and configured:

    Oracle Enterprise Linux 5 - user oracle with password oracle, user root with password oracle.
    Oracle Database XE - pre-configured with the SOA/BPM repository and set to auto-start on OS startup.
    Oracle SOA Suite 11g PS2 - configured with a "collapsed domain", all services (SOA/BAM/EM) running in the AdminServer and listening on port 7001.
    Oracle BPM Suite 11g - configured in the same domain as SOA Suite.
    Oracle JDeveloper 11g - with SOA/BPM extensions.

    Networking

    The VM by default uses NAT (Network Address Translation) for network access. Make sure that the advanced settings for port forwarding allow access through the host to guest ports. It should be pre-configured to forward requests on the following ports:

    Purpose     Host Port    Guest Port (VBox Image)
    SSH         2222         22
    HTTP        7001         7001
    Database    1521         1521

    Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work.

    What's Left to Do?

    There is still some customization of the environment that may be required. If you need to configure a proxy server, as I did, then set up an HTTP proxy for the oracle and root users:

    Add "export http_proxy=http://proxy-host:proxy-port" to ~oracle/.bash_profile and ~root/.bash_profile.
    Add "export http_proxy=http://proxy-host:proxy-port" to /etc/.bashrc.
    Edit System->Preferences to set the Network Proxy.
    In Firefox, set Preferences->Network->Connection Settings to "Use system proxy settings".
    In JDeveloper, set Edit->Preferences->Web Browser and Proxy to the required proxy settings.

    You may need to configure yum to point to a public OEL yum repository, such as http://public-yum.oracle.com. If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to the hostname of the host OS.

    Snap!

    Once I had the machine configured how I wanted to use it, I took a snapshot so that I can always get back to the pristine install I have now. Snapshots are one of the big benefits of putting a development environment into a virtualized environment. I can make changes to my installation and, if I mess it up, I can restore the image to the last known good snapshot.

    Hey Presto!, Ready to Go

    This is the quickest way to get up and running with SOA/BPM Suite. Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image. I also used yum to update the OS to the latest binaries. So have fun.
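    If you prefer to create or adjust the NAT port-forwarding rules above from the host's command line instead of the VirtualBox GUI, something along these lines should work; the VM name is a placeholder, and the rules mirror the table above.

        # The VM must be powered off when editing it with modifyvm (VM name is a placeholder)
        VBoxManage modifyvm "SOA-BPM-Appliance" --natpf1 "ssh,tcp,,2222,,22"
        VBoxManage modifyvm "SOA-BPM-Appliance" --natpf1 "http,tcp,,7001,,7001"
        VBoxManage modifyvm "SOA-BPM-Appliance" --natpf1 "db,tcp,,1521,,1521"

        # Verify the rules took effect
        VBoxManage showvminfo "SOA-BPM-Appliance" | grep -i rule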

    Read the article

< Previous Page | 722 723 724 725 726 727 728 729 730 731 732 733  | Next Page >