Search Results

  • Can I set up samba so it automatically allows all the local usernames and passwords?

    - by dialer
    I have set up Samba like this (this is the complete smb.conf):

        [global]
        log file = /var/log/samba/log
        log level = 2
        security = user

        [homes]
        browsable = false
        read only = no
        valid users = %S

    I'd like every user on the server to be able to access their home directory, but for some unknown reason only my 'administrator' account can do so. (I have done this with FTP before, but now SMB is also needed.) When I try smbclient -L localhost -U [user], I get NT_STATUS_LOGON_FAILURE, except with the administrator (which is the user created during the Ubuntu installation, not root). The Samba log file says NT_STATUS_NO_SUCH_USER:

        [2012/04/04 20:26:02.081454, 2] smbd/reply.c:554(reply_special)
          netbios connect: name1=LOCALHOST 0x20 name2=DIALER-X 0x0
        [2012/04/04 20:26:02.081733, 2] smbd/reply.c:565(reply_special)
          netbios connect: local=localhost remote=dialer-x, name type = 0
        [2012/04/04 20:26:02.087200, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [public] -> [public] FAILED with error NT_STATUS_NO_SUCH_USER

    I suspect that I have to create Samba users manually, but the man pages state that "if the client has passed a username/password pair and that username/password pair is validated by the UNIX system's password programs, the connection is made as that username." To me that sounds like as long as the provided username/password is a valid login on the server, it should work. Am I missing something totally obvious? I don't want to (and can't afford to) manually keep the Samba users and passwords in sync with the server's. Ubuntu 11.10.
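
    On stock Ubuntu Samba the default passdb backend is tdbsam, which means a UNIX account also needs an entry in Samba's own password database before it can authenticate; the man-page passage above describes password checking once the user is known to Samba. A hedged suggestion, assuming that default applies here:

        # add an existing UNIX user to Samba's password database
        sudo smbpasswd -a username
        # then try again
        smbclient -L localhost -U username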

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Any time I try to install a Python package using the command:

        sudo apt-get install python-package

    I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
         linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
         linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and threw an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

        sudo apt-get -f install

    But this only gave me several instances of the following error:

        dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
         unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new'
         (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device

    Now maybe I'm way off base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output from df -h:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             6.9G  5.7G  869M  88% /
        udev                  494M  4.0K  494M   1% /dev
        tmpfs                 201M  784K  200M   1% /run
        none                  5.0M     0  5.0M   0% /run/lock
        none                  501M   76K  501M   0% /run/shm
        VB_Shared_Folder      466G  271G  195G  59% /media/sf_VB_Shared_Folder

    When I run sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used," does that mean 192 MB of my virtual machine's current memory, or 192 MB on top of the rest of my free space? As I said, my machine normally dynamically allocates additional memory from the host machine, so I don't see why there would be memory restrictions at all...
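
    It really is the disk: a dynamically allocated VirtualBox disk only grows up to the maximum size chosen when it was created, and the guest filesystem inside it never grows by itself. Here /dev/sda1 is a 6.9 GB filesystem that is 88% full, and the "192 MB of additional disk space" refers to space inside that filesystem, not host memory. A hedged cleanup sketch, freeing the space that old kernels occupy (the XX version is a placeholder; list what is actually installed first, and keep the kernel you boot from):

        # see which kernel packages are installed
        dpkg -l 'linux-image-*'
        # drop the cached .deb archives
        sudo apt-get clean
        # purge an older kernel you no longer boot (keep the running one!)
        sudo apt-get purge linux-image-3.2.0-XX-generic
        # then retry the fix-up
        sudo apt-get -f install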

  • What's the best way to handle numerous recurring log entries in game loop?

    - by Kaa
    I have a custom logging system, use of which is scattered all over the engine and game. The system is linked to a LogStore that has an std::vector<std::string> logs[NUM_LOG_TYPES], where each vector corresponds to its log type (info, error, debug, etc.). There's one extra std::vector that holds "coordinates" of all log entries in the order they were received. Now, all the logging output is also displayed inside my development console in the game. The game console is handled by an HTML-type GUI and therefore requires a new <p> element to be added for each log output. My problem is that log entries generated each frame in the main loop freeze the engine, because they keep adding elements to the in-game console, and if the console or GUI itself generates a warning, that creates an infinite logging loop. I want to solve it by handling the recurring log entries in an elegant way that lets you know something is critically wrong but won't freeze the engine, like displaying the count of errors in the last 60 frames instead of displaying the errors themselves. But how do you handle this? Does anyone know any nifty tricks for it? I understand the question may sound vague, but anyone who has come across this type of issue will know exactly what's happening. Example problematic log entries: OpenGL warnings (I actually do check for errors every frame in many places), and really any prints anywhere in the main loop (may be debugging, may be warnings).
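
    A minimal sketch of the counting approach described above (Python for brevity; the same logic ports directly to C++, and the class and method names are illustrative):

        class ThrottledLog:
            """Counts repeated messages and flushes a summary every N frames."""
            def __init__(self, flush_every=60):
                self.flush_every = flush_every
                self.frame = 0
                self.counts = {}  # message -> occurrences since last flush

            def log(self, message):
                # first occurrence is shown immediately, repeats are only counted
                if message not in self.counts:
                    print(message)
                self.counts[message] = self.counts.get(message, 0) + 1

            def end_frame(self):
                self.frame += 1
                if self.frame % self.flush_every == 0:
                    for msg, n in self.counts.items():
                        if n > 1:
                            print(f"[x{n} in last {self.flush_every} frames] {msg}")
                    self.counts.clear()

    Calling end_frame() once per game-loop iteration caps the console at one summary line per distinct message instead of one <p> element per occurrence, which also breaks the console-warns-about-itself feedback loop.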

  • Best practice - handling images on a website

    - by Steve
    I am porting an old eCommerce site to MVC 3 and would like to take advantage of design improvements. The site currently stores product images in 3 sizes: thumbnail, medium (for display in a list), and expanded for a zoomed look. Right now we have to upload 3 separate images that are sized exactly right, provide 3 different names that match what the site expects, etc.; it is a pain. I'd like to upload just one file, the large one, then let the site reduce it to the needed sizes, and I'd like the flexibility to change the thumbnail and list sizes depending on user preferences, form factor (e.g. mobile, iPad, desktop), etc., so I might need many copies of the same image. My question is: should the image be reduced and saved several times upon upload, and if so, what is a good storage/naming convention? The other idea is to store just the single image but resize it programmatically before serving it to the client. Has anybody done this, and what are the trade-offs besides a few more machine cycles? How do you pass a temporary image in memory to the client (there is no URL)?
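
    A minimal sketch of both options (Python with the Pillow library, just to show the shape of the code; in ASP.NET MVC the System.Drawing APIs or an image-resizing library play the same role, and the size names and file layout here are illustrative):

        from io import BytesIO
        from PIL import Image

        SIZES = {"thumb": (100, 100), "list": (300, 300)}  # illustrative presets

        def save_renditions(original_path, product_id):
            # resize-on-upload: one stored file per preset, named by convention
            for name, box in SIZES.items():
                img = Image.open(original_path)
                img.thumbnail(box)  # preserves aspect ratio
                img.save(f"{product_id}_{name}.jpg", "JPEG")

        def render_in_memory(original_path, box):
            # resize-on-demand: return raw bytes, no file or URL involved
            img = Image.open(original_path)
            img.thumbnail(box)
            buf = BytesIO()
            img.save(buf, "JPEG")
            return buf.getvalue()  # e.g. write to the HTTP response body

    For the in-memory variant, an MVC action would typically return these bytes with an image/jpeg content type, which answers the "no URL" concern: the URL is the action itself.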

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts installed on each machine. I estimate that there could be at most 20 computers now; later it could be more like 50. My candidate solutions:

    1. Each agent stores data in a local database, e.g. SQLite, and exposes a service a client can use to query data. So if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm on this solution now, but maybe it's totally wrong.
    2. Each agent stores data in a local database (I don't know a good one for that), and the local databases are synchronized with a server holding the main database. In this case, a client connects to the main database to display data.
    3. Each agent sends data in real time to the main database; the same as point 2, but with no sync step.
    4. Like point 3, but the agent buffers data in its local database and sends it to the main database in small chunks.

    What is the best approach?
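
    A minimal sketch of option 4's agent side (Python with the standard sqlite3 module; the table layout and the send() transport are illustrative assumptions):

        import sqlite3

        db = sqlite3.connect("agent_buffer.db")
        db.execute("""CREATE TABLE IF NOT EXISTS events
                      (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)""")

        def record(payload):
            # called by the agent whenever it observes an activity
            db.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
            db.commit()

        def flush(send, chunk_size=100):
            # periodically push unsent rows to the central database in chunks
            rows = db.execute("SELECT id, payload FROM events WHERE sent = 0 LIMIT ?",
                              (chunk_size,)).fetchall()
            if rows and send([p for _, p in rows]):  # send() returns True on success
                db.executemany("UPDATE events SET sent = 1 WHERE id = ?",
                               [(i,) for i, _ in rows])
                db.commit()

    Buffering locally and acknowledging chunks keeps agents resilient to the central server being down, which is the main argument for option 4 over option 3.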

  • Oracle OpenWorld Recap - A Walk in the Clouds (and heat in San Francisco)!

    - by Di Seghposs
    Whether you were one of the 50,000 attendees in San Francisco or one of the million-plus online attendees, we'd like to thank you for joining us at Oracle OpenWorld last week! With temperatures in the 80s and 90s, attendees traveled the overheated streets to join packed keynotes and general sessions, all to find the information they came in search of: Oracle solutions to address their business requirements and challenges. The buzz of this year's OpenWorld was all about "The Cloud", and the financial management team joined in with Thomas Kurian's keynote, which highlighted our ERP Cloud Service as the most complete cloud service on the market. Offering the full breadth of business operations, including Financial Management, Risk and Control Management, Project Portfolio Management, Procurement, Sourcing, and Inventory Management, Oracle ERP Cloud Service transforms the back office into a collaborative, efficient, and intuitive hub. Our product marketing expert on Financial Management, Annette Melatti, provided a glimpse of what the office of finance looks like in the 21st century and shared what's next for Oracle's financial solutions, discussing the future of Financial Management with Fusion Financials, E-Business Suite, PeopleSoft, and the JD Edwards solutions. There were over 120 sessions from customers, partners, and Oracle experts that addressed financial management solutions, along with demo pods and Meet the Experts sessions. We hope you found what you were looking for! Missed any of the keynotes or general sessions? Watch them on demand here. At OpenWorld we also announced that Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making, implement robust reporting, and take advantage of the cost savings provided by the cloud. Lending Club's CFO, Carrie Dolan, mentioned that they "are an innovative, data-intensive, high-growth company and needed a solution and partner that could match us. We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer." Read the entire release here. For now, it's back to business as we gear up for the second half of our fiscal year and start planning for Oracle OpenWorld 2013!

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth revisiting. Check out this video, and then read on. So why so interesting? Well, you probably guessed from the title: ADF is involved, and this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back end, there's basically a robot. That's right: this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADFBC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is that all of this runs out of flash storage on a ridiculously small form factor with a tiny processor. I probably shouldn't reveal the actual specs, but take my word for it: don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in, and I'm pumped to see such a good result; I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

  • Should this be written in C or PHP?

    - by user1867842
    This is my code; it speaks for itself on what I'm trying to do:

        <?php
        define("html", "<html>");
        define("htmlEnd", "</html>");
        // etc... etc...
        ?>

    What I'm trying to do is make a wrapper for HTML's tags so they won't be needed anymore, but I can't get any of the attributes for HTML elements to be defined in PHP. I guess what I'm asking is: how would I make another markup language like HTML, without any tags, but still keeping everything about HTML? My idea is to prevent XSS by creating a special framework for the website itself, so that no malicious attacker can guess it just because they know HTML or PHP. I just don't want to build my website, or a website for someone else, and have it get hacked; I would look like an unprofessional web developer. And what if I never got a job again?
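
    For what it's worth, the standard defense against XSS is escaping untrusted output rather than hiding the markup language; in PHP that's the built-in htmlspecialchars(). A sketch of the same idea (shown here in Python; the input string is just an example):

        import html

        user_input = '<script>alert("xss")</script>'
        # escaping turns markup metacharacters into inert text
        safe = html.escape(user_input)
        print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;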

  • Implementing unit testing at a company that doesn't do it

    - by Pete
    My company's head of software development just "resigned" (i.e. was fired), and we are now looking into improving the development practices at our company. We want to implement unit testing in all software created from here on out.

    Feedback from the developers:

    - We know testing is valuable.
    - But you are always changing the specs, so it'd be a waste of time.
    - And your deadlines are so tight we don't have enough time to test anyway.

    Feedback from the CEO:

    - I would like our company to have automated testing, but I don't know how to make it happen.
    - We don't have time to write large specification documents.

    How do developers get the specs now? Word of mouth or a PowerPoint slide. Obviously, that's a big problem. My suggestion is this: let's also give the developers a set of test data and unit tests, and that's the spec (a minimal example follows). It's up to management to be clear and quantitative about what it wants. The developers can put in whatever other functionality they feel is needed, and it need not be covered by tests. Well, if you've ever been in a company that was in this situation, how did you solve the problem? Does this approach seem reasonable?
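
    A minimal example of tests-as-the-spec (Python's built-in unittest; the function under test and the management-supplied cases are invented for illustration):

        import unittest

        # management supplies the cases; developers make them pass
        SHIPPING_SPEC = [
            # (order_total, expected_shipping_cost)
            (10.00, 5.00),
            (49.99, 5.00),
            (50.00, 0.00),  # free shipping at $50, per the spec
        ]

        def shipping_cost(order_total):
            return 0.00 if order_total >= 50.00 else 5.00

        class ShippingSpec(unittest.TestCase):
            def test_spec_cases(self):
                for total, expected in SHIPPING_SPEC:
                    self.assertEqual(shipping_cost(total), expected)

        if __name__ == "__main__":
            unittest.main()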

  • What is the most concise, unambiguous syntax for operator associated methods (for overloading etc.) that doesn't pollute the namespace?

    - by Doug Treadwell
    Python adds double underscores around its built-in or overloadable operator methods, like __add__(), whereas C++ requires declaring overloaded operators as, for example, operator + (Thing& thing) { /* code */ }. Personally I like the operator syntax because it seems more explicit and keeps these operator-overloading methods separated from other methods without introducing a weird prefix notation. What are your thoughts? Also, what about the case of built-in methods that are needed for the programming language to work properly? Is name mangling (like adding a __ prefix, or sys, or something) the best solution here? What do you think about having another type of method declaration, like... "system method", for lack of creativity at the moment? So there would be two kinds of declarations:

        int method_name() { ... }
        system int method_name() { ... }

    ...and the call would need to be different to distinguish between them, perhaps:

        obj.method_name();  vs  obj:method_name();

    assuming a language where : can be unambiguously used in this situation. Or:

        obj.method_name()  vs  obj.(system method_name)()

    Sure, the latter is ugly, but the idea is to make the common case simple, and system stuff should be kept out of the way. Maybe the Objective-C notation of method calls, [obj method_name]? Are there more alternatives? Please make suggestions.
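
    For reference, a minimal example of Python's convention (the Vec class is illustrative):

        class Vec:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __add__(self, other):  # called for: a + b
                return Vec(self.x + other.x, self.y + other.y)

        a, b = Vec(1, 2), Vec(3, 4)
        c = a + b          # Python rewrites this as a.__add__(b)
        print(c.x, c.y)    # 4 6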

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I’ve release an updated version of QueryUnit, the version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is more capable of managing strings results. I must say that I cannot live anymore without a proper Unit Testing of a BI solution. Just yesterday happened that one of the unit tests at a customer site failed showing a subtle situation where the release of a new version of custom application would have corrupted the source of BI data with a very low chance that someone would have noticed it before several days. It may happen when you have more the 15 systems that handles the data needed by your BI solution. The key message of this situation is “Trust No One”: if your data hasn’t passed quality testing it’s not trustable. Period. QueryUnit is now officialy an hero :) No superpowers still, but useful above all. http://queryunit.codeplex.com/ Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

  • Can't get utouch to show available wifi networks

    - by kellrobinson
    I have Ubuntu Touch installed on a 2013 Nexus 7. Swiping down from the wireless symbol reveals a "Network" menu with the choices Flight Mode, Wifi Settings, and Cellular Settings. Wifi Settings leads to another menu: Previous Networks and Other Networks. Previous Networks shows a list of networks used in the past; Other Networks opens an empty box for typing in the name of a network. I don't see any way to show a list of available networks detected by the device. On rare occasions, swiping down from the wireless symbol actually does bring up a list of detected networks, but most of the time Ubuntu Touch exhibits the behavior described above, with no apparent way to bring up the list of available wireless networks. I would like to see a list of the available networks, if there is a way to do so. Edit: The wifi menu works properly now; it just needed a couple of reboots, it seems. I have other problems, though, and if they persist I will make a post specific to them. This device is a 2013 Nexus 7 4G. I'm not sure how to find the Ubuntu version; I can't navigate the Settings menu right now because it got stuck and there's no way to go back except to reboot(!). I'll open multirom manager or boot into recovery and look for the information there.

  • Case studies for successful service (project) based software development businesses without constant overtime from their employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set a new expectation that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable: more hours worked = more billable hours = larger profit. I understand the need for profitability and the occasional crunch time, and I have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies of a successful services-based business model (for software development companies) that does not require consistent overtime from its employees?

  • Is there an open source version check library and web app?

    - by user52485
    I'm a developer of a cross-platform (Win, MacOS, Linux) open source C++ application. I would like the program to occasionally check for the latest version from our web site. Between the security, privacy, and cross-platform network issues, I'd rather not roll our own solution. It seems like this is a common enough thing that there "ought" to be a library or web app that does it; unfortunately, the searches I've tried come up empty. Ideally, the web app would track requests and process the logs into some nice reports (number of users, what version, what platform, frequency of use, maybe even geographical info from IP addresses, etc.), while appropriately respecting privacy. What pre-existing tools can help solve this problem? Edits: I am looking for a reporting tool, not a dependency checker. Our project has the challenge of keeping up with our users: most do not join the mailing list, and the project has not been picked up by major distributions (most of our users are on Windows/MacOS anyway). When a new version comes out, we have no way of informing our users of its existence. Development is moving pretty fast, with major features added every few months. We would like to give the user a way to check for an updated version, and while we're at it, we would like to use these requests for some simple, anonymous usage tracking (X users running version Y with Z frequency, etc.). We do not need or want something that auto-updates or tracks dependencies on the system. We are not currently worried about update size; when the user chooses to update, we expect them to download the complete latest version. We would like to keep this as simple as possible.
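
    The client half of this is small in any language. A sketch of the protocol (Python for brevity; the URL and the JSON response format are invented for illustration, and in the C++ application something like libcurl would play the same role):

        import json
        from urllib.request import urlopen

        CURRENT_VERSION = (1, 4, 2)  # illustrative
        CHECK_URL = "https://example.org/myapp/latest.json"  # hypothetical endpoint

        def check_for_update():
            # server returns e.g. {"version": "1.5.0", "url": "https://..."}
            with urlopen(CHECK_URL, timeout=5) as resp:
                latest = json.load(resp)
            latest_version = tuple(int(p) for p in latest["version"].split("."))
            if latest_version > CURRENT_VERSION:
                return latest["url"]  # tell the user where to download
            return None

    Each hit on latest.json is also a log line, which is exactly the anonymous usage signal the reporting side can aggregate: version, platform from the User-Agent, coarse geography from the IP.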

  • Allowing client to select data to return via REST interface

    - by CMP
    I have a REST service that is essentially a proxy to a variety of other services. So if I call GET /users/{id}, it gets their user profile, as well as order history, contact info, etc., all from various services, and aggregates them into one nice object. My problem is that each call to a different service has the potential to add time to the original request, so we would rather not fetch ALL the data ALL of the time if a particular client does not care about all of the pieces. A solution I have arrived at is to do something like this:

        GET /users/{id}?includeOrders=true&includeX=true&includeY=true...

    That works, and it allows me to fetch only what I need, but it is cumbersome. We have added enough different data sources that there are too many parameters for that style to be useful. I could do something similar with a single integer and a bitmask, but that only makes it harder to read, and it does not feel very RESTful. I could break it down into multiple calls, so clients would call /users/{id}/orders and /users/{id}/profile separately, but that sort of defeats the purpose of an aggregating proxy, whose purpose is to make clients' jobs easier. Are there any good patterns that can help me return just enough data for each client, without making it too difficult for them to filter and select what they want?
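
    One common pattern is a single comma-separated field-selection parameter, e.g. GET /users/{id}?include=orders,contact, in the spirit of the partial-response convention some large REST APIs use. A minimal sketch of the proxy side (Python; the fetcher functions are stand-ins for the downstream service calls):

        # one fetcher per downstream service; called only when requested
        FETCHERS = {
            "profile": lambda uid: {"name": "..."},   # stub
            "orders":  lambda uid: [],                # stub
            "contact": lambda uid: {"email": "..."},  # stub
        }

        def get_user(uid, include="profile"):
            wanted = [p for p in include.split(",") if p in FETCHERS]
            # only the requested services are hit, so unused ones cost nothing
            return {name: FETCHERS[name](uid) for name in wanted}

        print(get_user(42, include="profile,orders"))

    This keeps the aggregating proxy's convenience while letting each client pay only for the pieces it asks for.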

  • How can I better manage far-reaching changes in my code?

    - by neuviemeporte
    In my work (writing scientific software in C++), the people who use the software to get their work done often ask me to add some functionality or to change the way things are done and organized right now. Most of the time this is just a matter of adding a new class or a function and applying some glue to do the job, but from time to time a seemingly simple change turns out to have far-reaching consequences that require me to redesign a substantial amount of existing code, which takes a lot of time and effort and is difficult to estimate up front. I don't think it has as much to do with inter-dependence of modules as with changing requirements (admittedly, on a smaller scale). To provide an example, consider the recently added multi-user functionality in Android. I don't know whether they planned to introduce it from the very beginning, but assuming they didn't, it seems hard to predict all the areas affected by the change (app preferences, themes, the need to store account info somehow, etc.), even though the concept seems simple enough and the code is well organized. How do you deal with such situations? Do you just jump into the code and then sort out the cruft later, like I do? Or do you do a detailed analysis beforehand of what will be affected, what needs to be updated and how, and what has to be rewritten? If so, what tools (if any) and approaches do you use?

  • Ubuntu 13.10 live boot/installer loads dots, then does nothing

    - by user200245
    Problem: when I boot an Ubuntu 13.10 (64-bit) live USB installer on my laptop, the purple Ubuntu screen with the white dots appears, the dots turn orange (indicating it's loading), then after a few seconds the screen turns completely black and nothing else happens. I cannot install Ubuntu 13.10 on this computer. What I've tried: I've re-downloaded the .iso file from Canonical, and I've written the .iso to the USB with the default Linux USB image writer and with the Windows programs YUMI and LinuxLive USB Creator; the same thing happens each time. Yes, I'm sure my computer is 64-bit; I'm currently running Linux Mint 15 on it, which runs perfectly. It's a Sager NP7330 / Clevo W230ST. Extra info: a few months ago I installed Ubuntu 13.04 on this machine, which installed perfectly. Normally I'd install that and then dist-upgrade to 13.10; HOWEVER, this computer is only a few months old, and the drivers for my network card were not implemented until kernel 3.11 (which comes with 13.10). I tried manually downloading kernel 3.11 and installing it on 13.04, but neither the wireless card nor the ethernet card worked with Ubuntu 13.04. So my only real hope is to get 13.10 working. Does anyone know what's up with this?
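
    A hedged guess based on the symptoms: a black screen right after the loading dots on a Clevo laptop with hybrid NVIDIA graphics is the classic kernel mode-setting failure, and the usual workaround is booting the installer with the nomodeset parameter. At the live USB boot menu, press e (or Tab on the text menu) to edit the boot entry and append it to the kernel line, so the line ends in:

        quiet splash nomodeset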

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos, with a dependency graph like this:

        M---->D
        ^     ^
        |     |
        +--+--+
           ^
           |
           S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only succeed if, when S is pushed, M and D have already been pushed; otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough; the dependencies would have to be built and published before the dependent job even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if a build could go into a pending state when its dependencies aren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?

  • How to solve this problem

    - by Surbir
        root@me-desktop:~# sudo apt-get install aircrack-ng
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          aircrack-ng
        0 upgraded, 1 newly installed, 0 to remove and 446 not upgraded.
        1 not fully installed or removed.
        Need to get 1,579kB of archives.
        After this operation, 2,843kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick/universe aircrack-ng i386 1:1.1-1 [1,579kB]
        Fetched 1,579kB in 1min 9s (22.7kB/s)
        Selecting previously deselected package aircrack-ng.
        (Reading database ... 520739 files and directories currently installed.)
        Unpacking aircrack-ng (from .../aircrack-ng_1%3a1.1-1_i386.deb) ...
        Processing triggers for man-db ...
        Setting up linux-image-3.0.1-030001-generic (3.0.1-030001.201108060905) ...
        Running depmod.
        update-initramfs: Generating /boot/initrd.img-3.0.1-030001-generic
        Warning: No support for locale: en_US.utf8
        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/dkms 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/nvidia-common 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/pm-utils 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/update-notifier 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.0.1-030001-generic /boot/vmlinuz-3.0.1-030001-generic
        exec: 15: update-grub: not found
        run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.0.1-030001-generic.postinst line 1010.
        dpkg: error processing linux-image-3.0.1-030001-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        Setting up aircrack-ng (1:1.1-1) ...
        Errors were encountered while processing:
         linux-image-3.0.1-030001-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        root@me-desktop:~#
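
    For what it's worth, the aircrack-ng install itself succeeded; the error is the half-configured mainline kernel package, whose post-install script cannot find update-grub. A hedged suggestion, assuming GRUB 2 is the intended bootloader on this system:

        # check whether update-grub exists at all
        which update-grub
        # if it is missing, reinstall the packages that provide it
        sudo apt-get install --reinstall grub-pc grub-common
        # then let dpkg finish configuring the stuck kernel package
        sudo dpkg --configure -a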

  • Co-worker renamed all of my queries

    - by anon
    I don't know if I should be very irritated or what. I single-handedly built over 300 queries for a large database and developed a naming convention so I could find them later. No one else in my office even knows how to build a query, but I came in yesterday to find that all of them had been renamed. I am now having a very hard time finding things, and I am trying to figure out what to do. I spoke with the person responsible, and she just downplayed the whole thing; she said she renamed them so she could find them more easily. Unfortunately, I am the only one who knows how to build, edit, and maintain them, and the only reason she needed to find them was to test the queries. The new naming convention doesn't make sense at all, and I feel like we have taken a step backwards in the development process. What I'm trying to figure out is: 1) Am I overreacting? 2) What is the best way to handle this? I hate to mention this to my boss, but after speaking with my co-worker yesterday, I can already tell she feels she did nothing wrong.

  • Video Bug after a fresh installation

    - by Matan
    Hello, I just installed Ubuntu 10.10 (I'm brand new to Ubuntu) on my laptop, and I seem to have a video bug that I don't know how to deal with. When the login screen comes up, the boxes are way off in the corner of the screen (partially off it). When I enter my password, the screen goes black for a few seconds, then returns to the login screen. I can open a terminal window and enter my login info that way; when I go back to Gnome (Ctrl+Alt+F7 or whatever) it shows me as logged in, but I still can't get to the desktop. If anyone has any advice, I'd love to hear it; just try to use simple language, please, since I really don't know Linux at all yet! I'm running an Averatec 3700 Series: Mobile AMD Sempron 3000+, 512 MB DDR, 80 GB HDD. After looking at this question, I tried going in through failsafe mode (took me a while to figure out the hold-Shift-while-booting thing >_<) and playing around with the resolution. Setting a somewhat wider resolution did seem to fix things so that I can log into regular GNOME, I think. I'm not sure if this fix will persist, but it seems like it might!

  • What version was installed? x64 or i686? What's the difference exactly?

    - by Seppo
    Okay, so here's my problem. I recently started migrating several services to individual VMs on my box, using VirtualBox 4.1. I created a new VirtualBox VM with guest type "Ubuntu (64 Bit)"; I've done this before and it worked like a charm. I then installed Ubuntu Server (12.04) from the exact same DVD image, thinking the whole time that it would install x64. I've already put a few hours of work into the new VM, migrating the webserver and mail system etc. Today I tried installing an x64 piece of software, and it suddenly told me that it needed x64 and I had only i686. I checked uname -a, and this is what it gave me:

        Linux hostname 3.2.0-29-generic-pae #46-Ubuntu SMP Fri Jul 27 17:25:43 UTC 2012 i686 i686 i386 GNU/Linux

    Any guesses what went wrong? All the time I was thinking I had an x64 system. Is there any way to move to a "real" x64? I have a second VM on this host which is running x64 just fine. P.S.: grep --color=always -iw lm /proc/cpuinfo returns lm among the flags.
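
    The uname output settles it: a -generic-pae kernel reporting i686/i386 means the 32-bit (i386) build of Ubuntu was installed, even though the CPU supports 64-bit (that is what the lm flag in /proc/cpuinfo indicates). Two quick checks:

        uname -m                     # i686 = 32-bit kernel, x86_64 = 64-bit kernel
        dpkg --print-architecture    # i386 vs amd64 userland

    There is no supported in-place migration from i386 to amd64; the usual route is to reinstall the VM from an amd64 ISO (double-check the image name this time) and restore the configuration and data.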

  • What makes a theme "Premium"

    - by Sinthia V
    I have a lot of time invested in creating WordPress templates, and I want to release combinations of these templates, along with different styles and fancy front pages, as "premium WordPress themes". What I need to know is: what does "premium" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required for a theme to be premium? Are there features that are in demand but considered "exceptional", i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers saying that any theme that makes money is premium, but I mean to ask what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about; I have to isolate the things that people want. This is serious food and rent to me. How can I get this kind of info so I can make my project successful?

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle, it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle Systems Group team, we now have a complete best-practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like:

    - SPARC T4
    - OVM for SPARC hypervisor (version 2.2 and newer)
    - Solaris 11
    - Ops Center 12c
    - ZFS Storage Appliance
    - Oracle network switches

    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:

    - Easy, GUI-based provisioning of new VMs
    - Automated HA failover in the event of physical server failures
    - Automatic load balancing across a cluster of VM hosts
    - Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

  • Latest (5 June 14) Updates to 10.04 Causing Multiple Problems

    - by user291780
    Apologies: the questions are very short, but the background isn't. I received a routine notification from the Update Manager a few days ago (I believe June 5th). I took a look, and there was lots of Linux stuff, headers, etc., nothing obviously unusual. I'd received and applied a more extensive package set, kernel and all, a few weeks ago with no problem. On June 6th I pushed the upgrade button on the June 5th batch; nothing unusual, it needed a reboot, which I did after a full power down, and it came up fine. gedit worked, the calculator worked; then I started up Firefox. It came up, I selected the Bookmarks menu, and blam: it hesitated and then greyed out, and when I tried to close it I got the "process not responding" message. Undaunted, I tried to fire up Google Chrome: nothing on the screen or the process bar. I fired up the System Monitor, and indeed there were some sleeping Chrome processes "running". I powered down several times, but the same problems persist. It's a similar but worse story when I try to fire up one of my virtual machines: VirtualBox comes up fine, but when I try to start a VM I get a progress popup I'd never seen before, which shows no progress past 20%. I uninstalled Oracle VirtualBox, reinstalled the latest and greatest, same result. I'm also unable to log out or shut down once a virtual machine has exhibited this behavior, so I power down manually. I've never seen such a bad result after an update. I'm running Ubuntu 10.04 LTS (Lucid Lynx), as I have been for a number of years. Please don't reply with "why don't you run some other version of Ubuntu"; that doesn't answer the questions below. Questions: 1) Will there be a subsequent update that fixes this, and if so, when? 2) If not, is there a way for me to get back to where I was before this disaster?
