Search Results

Search found 16032 results on 642 pages for 'everyday problems'.

  • Unable to mount smb share. "Please select another viewer and try again". Please help. Serious smb/nautilus foo needed

    - by oznah
    I don't think this is the typical "I can't mount a Windows share" post. I am using stock Ubuntu 12.04. I am pretty sure this is a Nautilus issue, but I have reached a dead end. I have one share that I can't mount using smb://server/share via Nautilus; I get the following error: "Failed to mount Windows share: Please select another viewer and try again". I can mount this share from other machines (non-Ubuntu) using the same credentials, so I know I have permissions on the destination share. I can mount other shares on other servers from my Ubuntu box, so I am pretty sure I have all the smb packages I need. To make things more interesting, if I use smbclient from the command line, I can mount this share with no problems from my Ubuntu box. So here's what we know: destination share permissions are OK (no problem accessing from other machines); smb is set up correctly on the Ubuntu box (other Windows shares mount fine); I only get the error when using Nautilus; smbclient in a terminal works without problems. Any help would be greatly appreciated. Googling turned up simple mount/permission issues, and I don't think that is what is going on here. Let me know if you need more information. Hugh
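
    One way to narrow this down (a sketch, not a confirmed fix; the server and share names are placeholders) is to compare what already works with what Nautilus actually does, since Nautilus goes through GVFS while smbclient and a kernel CIFS mount do not:

        # works for the asker: plain smbclient against the problem share
        smbclient //server/share -U username -c 'ls'

        # roughly what Nautilus does under the hood (gvfs-mount is in the gvfs-bin package);
        # running it from a terminal usually shows a more specific error than the Nautilus dialog
        gvfs-mount smb://server/share

        # bypass GVFS entirely with a kernel CIFS mount to confirm the server side is fine
        sudo mkdir -p /mnt/share
        sudo mount -t cifs //server/share /mnt/share -o username=username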

    Read the article

  • fix vmware workstation 9 installation in ubuntu 12.10

    - by Alessandro Belloni
    I have opened this thread because I upgraded to Ubuntu 12.10 beta (kernel 3.5) and I have a problem with VMware Workstation 9: "Unable to change virtual machine power state: Cannot find a valid peer process to connect to". Has anyone got the same problem? I did a clean install of Ubuntu 12.10 (daily build), installed VMware Workstation 9 and patched it, but it is not working. My laptop is a Lenovo ThinkPad T420 with Nvidia Optimus technology. I can't apply the patch correctly and get the modules built, so my configuration is: Ubuntu 12.10 fresh installation, VMware Workstation 9 fresh install, on a Lenovo ThinkPad T420 with an Nvidia Optimus video card. This message is shown when I try to apply the patch: # Stopping VMware services: VMware Authentication Daemon done At least one instance of VMware VMX is still running. Please stop all running instances of VMware VMX first. VMware Authentication Daemon done Unable to stop services # How can I stop the VMware services to apply the patch? This message is shown when I try to patch again: # ./patch-modules_3.5.0.sh /usr/lib/vmware/modules/source/.patched found. You have already patched your sources. Exiting # But VMware is not working, and I can't uninstall…
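
    The usual way around the "VMware VMX is still running" check, before re-running the patch, is to stop the services and any stale vmx processes by hand; a rough sketch (the marker path and script name come from the post, the rest is an assumption about a standard Workstation 9 install):

        # stop the host services shipped with Workstation 9
        sudo /etc/init.d/vmware stop

        # kill any leftover VM processes that keep the services "in use"
        ps aux | grep '[v]mware-vmx'
        sudo pkill -f vmware-vmx        # only if the list above shows stale processes

        # the patch script refuses to run twice; clear its marker, then re-run it
        sudo rm /usr/lib/vmware/modules/source/.patched
        sudo ./patch-modules_3.5.0.sh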

    Read the article

  • Kanji characters appear as boxes

    - by s3d10s
    I'm having trouble with the display of Japanese characters on Windows 8 Pro (English, 64-bit, updated regularly). They appear as boxes (picture) in Windows Explorer, Windows menus (even in the language settings of Control Panel), iTunes and nearly everywhere else, except web browsers. I was using Windows 7 until now, it didn't have any of these problems, and I'm using the same applications now as I was using in Windows 7. Sometimes (!) when I restart the machine the problem goes away, but that isn't a real solution. What I've tried so far: added the Japanese language to Windows, of course; installed/uninstalled/reinstalled the Japanese language pack (it didn't have any impact, though); and a possible solution I read in one of these Super User topics, where I had to create a txt file on the desktop with a kanji in the filename - that also didn't work (but I honestly hope that hacking and tweaking an operating system released in 2012 can't be the solution to displaying kanji..). Please give me any ideas; I'm a bit hopeless here and don't want to spend my life reinstalling operating systems.

    Read the article

  • App_offline.htm and SharePoint and wholly contained images…

    - by Shawn Cicoria
    The question came up today of whether we could use an “app_offline.htm” file along with HTML in that file that would reference images. First, I wasn’t 100% sure if the app_offline.htm would work, but it sure did. Since it’s just the Asp.net hosting process that detects the file, it circumvents loading any HttpApplications (SharePoint) beyond just serving up the HTML content. The second question was about having something more than text, specifically <img> tags. So, since the HttpHandlers are taking all requests for all resources through the Asp.net pipeline, as soon as the app_offline.htm file is there, nothing else will get served from within that web application. Sure, we could host the file (images) outside the web app, but what fun would that be? So, I found this link on using an image in app_offline.htm http://pbodev.wordpress.com/2009/12/20/app_offline-htm-with-an-image-yes-we-can/ Turns out, the src attribute (in fact many attributes) can take a stream of data represented by a mime type and base64 encoding inline – such as: <img style="height:515px;width:700px;border-width:0px;" src="data:image/jpeg;base64,/9j/4AAQSkZJRgA   One of the problems we had was that the image was too large; so I sliced up the image, but ended up with spaces between each of the slices. Lo and behold, it works with CSS as well.   <style type="text/css"> .Slice_1_jpg { position: absolute; left:0px; top:0px; width:1011px; height:148px; background: url("data:image/jpeg;base64,/9j/4AAQSkZ   And the body is as follows: <body> <div class="Slice_1_jpg"> </div>   For this, I wrote a little Asp.Net site that, using a file upload control, will generate the necessary contents of what needs to go in the “data” value for the stream. A link to the code is here: http://cicoria.com/downloads/CreateBase64OfImage.zip
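
    If you would rather not stand up an Asp.Net helper just to produce the data: URI, the same payload can be generated from a shell; a small sketch (the file name is a placeholder, and -w0 is the GNU coreutils flag for no line wrapping):

        # emit a single-line data URI for slice_1.jpg, ready to paste into src="..." or url("...")
        printf 'data:image/jpeg;base64,%s\n' "$(base64 -w0 slice_1.jpg)"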

    Read the article

  • Aptana Ext code completion in .php files

    - by Frederik Wordenskjold
    I'm having problems getting the code-completion for the Ext 3.2 plugin to work, when working with .php files. I've also installed the php plugin for Aptana, and the same thing applies for php - I cannot access php code-completion when working with a html-file, so it seems like a general issue... It's also not possible to write Ext in .js files, which is weird... The latter case does make sense of course. But I should be able to write both php and Ext in .php files! Is this possible in any way? I have of course tested the code-completion for php in a .php file, which works. The same applies for Ext code in .html documents!

    Read the article

  • Using the link command to keep backups on another drive

    - by Xavier
    I have a folder called /data/backup that sits on a filesystem without much free space. I have been told that if I link that folder (/data/backup) to a folder on a bigger filesystem, like /bigdata/backup for example, I will still be able to run backups to the /data/backup folder. It will then just be a link: the data will be visible in both folders, and the latter one (/bigdata/backup) will actually hold the backup results while showing up in both places. Since /bigdata/backup has far more disk space, the backup will no longer fail because of space problems on the /data/backup side. Is this true?
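
    What is being described is a symbolic link. Roughly, and assuming /bigdata is a separate filesystem with plenty of room and the existing /data/backup can be moved out of the way first:

        # move the existing backups to the big filesystem, then point the old path at it
        sudo mv /data/backup /bigdata/backup
        sudo ln -s /bigdata/backup /data/backup

        # writes to /data/backup now consume space on /bigdata, not /data
        df -h /data /bigdata

    The caveat is that nothing must recreate /data/backup as a real directory, and the backup tool has to be happy following a symlink.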

    Read the article

  • Local Live Quicktime Video Broadcast, latency?

    - by Snowwire
    I'm looking into the feasibility of using a local server to distribute live video of a conference to delegates in the same room. They would still hear the live audio coming from the speaker, so only the video would be streamed. I was considering a Darwin Streaming Server (a lot of iPhone users to support) and encoding with H.264. My main concern is latency across the network. Even with everything running locally, would there be lip-sync issues between the live audio and the 'live' video stream? It feels like there will be problems given the encoding, broadcasting and decoding to be completed, but I've never done anything like this before so I thought I would check. Thanks

    Read the article

  • I just moved, have no internet on my laptop or ipod, but everyone else who lives here does

    - by Kay
    I just moved to Thunder Bay and my laptop as well as my ipod cannot connect to the internet. My laptop allows me to write the password for the wifi, but I still have no internet connection. When I try to use the cable, the computer tells me I have a perfect connection, even the icon shows that it's working, but I can't open any web pages or use any internet functions. When I try to use MSN it sends me to a troubleshooting option and informs me that there's some kind of problem with the "gateway". I have unplugged the modem and the router and plugged them back in, this did not help. I am living in a home where all the people are using wifi on the same system as me, and no one has ever had any problems. Back home both my laptop and my ipod worked without a problem both in my home, as well as on campus. Since this problem seems to be limited only to me, it would indicate that there's a problem on my end -- with my laptop. However, in that case my ipod would be working. It has never failed to connect before. Any suggestions would be much appreciated.

    Read the article

  • How do you keep cool when production system goes down?

    - by Mag20
    This has happened to most of us... You come to work one day. Everything seems normal: the sun is shining, birds are chirping, but you notice a couple of weird things on your way to work, like déjà vu with the cat in The Matrix. You get into the office, where a lot of phones are ringing, but it could be that they are just doing a new sales promotion. You settle in, when you notice a dark cloud hovering over you. It takes you a couple of moments, but you recognize the cloud is your boss. Usually he checks on you every morning with his "Soooo Peeeeter, how about those TCP/IP reports?" routine, but today he has forgotten everything about common manners and rudely invaded your personal space. No "Good Morning", just some drooling, grunts and curses. He reminds you a bit of a Neanderthal trying to get away from a cyber-tooth tiger, fear and panic all compressed into a tight ball. You try to decipher the new language that he has created since yesterday and you start understanding that something bad happened overnight - the production system went down. Now, your system is usually used by clients during regular working hours from 9-5, but for whatever reason you didn't get any alerts on your beeper (for people under 30 - a beeper was like a mobile phone that could only ring and tell you who beeped you). Need to remember to charge it next time. So it is 8:45am, and the system MUST be up at 9am. Every 10 seconds, your boss lets out yet another curse, which communicates to you that another customer is having problems getting into the system. Also, several account managers are now hovering over your boss trying to make him understand how clients are REALLY, REALLY suffering. Everyone is depending on you to get the system up ASAP, and at the same time they hinder your progress by constantly distracting you. How do you keep cool in a situation like this?

    Read the article

  • Yum install problem on netsolvps server

    - by Thomas
    I tried to install yum based on http://www.wallpaperama.com/forums/how-to-install-yum-problems-installing-on-linux-redhat-fedora-commands-t471.html. My current setup is Python 2.4.6, Red Hat Fedora Core 6, 4.1.1-28, RPM 4.4.2. I tried the yum 3.0.1 version for my current server configuration. There is no configure file in 3.0.1, so I used the make install command. All files compiled without error. If I run #rpm -q yum it says "package yum is not installed". What's the problem here with the yum installation?
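
    This is the expected behaviour of a source install: "make install" copies files straight onto the filesystem and never registers anything in the RPM database, so "rpm -q yum" will report the package as not installed no matter how cleanly the build went. A quick way to see what actually got installed (a sketch, independent of the linked guide):

        # the binary placed by "make install" lives outside RPM's knowledge
        which yum
        yum --version

        # rpm -q only knows about packages installed from .rpm files
        rpm -q yum        # "package yum is not installed" is normal after a make install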

    Read the article

  • Giving PHP the permission to make a git pull request

    - by Bernd
    Hello, I would like to allow PHP to execute a Git pull command, but there are some problems with the user and permissions. How did you solve this problem? PHP runs as user www-data, so I've changed the .git directory owner/group to www-data (chown www-data:www-data -R .git). As it turned out later, www-data has no SSH keys. Is it a good idea to give it one? If yes, where should it be placed? Or is it possible to allow it to use a specific key? Best regards, Bernd
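
    One common pattern (a sketch, with placeholder paths and hostname, assuming a Debian/Ubuntu layout where www-data's home is /var/www) is to give www-data its own read-only deploy key and tell ssh to use it for the git host:

        # create a key pair for the web server user (no passphrase) and register the
        # public key as a read-only deploy key on the remote repository
        sudo -u www-data mkdir -p /var/www/.ssh
        sudo -u www-data ssh-keygen -t rsa -N '' -f /var/www/.ssh/deploy_key

        # point ssh at that key for the git host (git.example.com is a placeholder)
        sudo -u www-data sh -c 'printf "Host git.example.com\n  IdentityFile /var/www/.ssh/deploy_key\n  StrictHostKeyChecking no\n" >> /var/www/.ssh/config'

        # what the PHP side effectively runs (e.g. via shell_exec)
        sudo -u www-data sh -c 'cd /path/to/repo && git pull'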

    Read the article

  • Adobe Plugin in Firefox showing grey screen; something to do with range requests on Apache?

    - by Sam Minnée
    I have a web page with a link to a PDF file (target="_blank"). If I click the link, the PDF reader just shows a grey screen within the Firefox browser. If I copy that link and manually open it in a new tab, the PDF will display correctly, and subsequent requests made by clicking the original link now work, suggesting that the problem occurs when loading the file into the cache. It appears as though the Adobe PDF reader plugin is making byte-range requests (I see lots of 206 responses) and I suspect that this may be the cause of the issue. I am running an Apache webserver. Has anyone had problems with Apache and Adobe's byte-range requests? Are there any workarounds?
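
    A workaround that often comes up for this combination is to stop Apache from advertising or honouring byte ranges for PDFs, so the plugin fetches the whole file in one go; a sketch assuming an Ubuntu/Debian-style layout with mod_headers enabled (adjust paths for your distribution):

        # drop a small config fragment that disables byte ranges for PDFs
        printf '%s\n' \
          '<FilesMatch "\.pdf$">' \
          '    Header set Accept-Ranges none' \
          '    RequestHeader unset Range' \
          '</FilesMatch>' | sudo tee /etc/apache2/conf.d/pdf-no-ranges.conf
        sudo /etc/init.d/apache2 reload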

    Read the article

  • Explorer forgetting how to cut and paste files

    - by raimesh
    My Windows XP (SP3) machine (at work) occasionally "forgets" how to cut-and-paste files in Explorer - if I cut or copy a file then go to paste it elsewhere, the "paste" and "paste shortcut" menu items are greyed out. The keyboard shortcuts (Ctrl+X/Ctrl+C,Ctrl+V) don't work either, and neither does Drag-and-Drop - clicking and dragging a file/folder doesn't change the pointer and letting go doesn't drop the item anywhere. Restarting the explorer.exe process usually fixes the problem, but sometimes I need to do a full reboot. Note: this may be related to the following existing question, although that's Vista and it sounds like the asker has more permanent problems than I do: http://superuser.com/questions/27045/copy-paste-in-vista-explorer-broken-not-ms-vpc So, any idea what might be causing the problem and how to avoid it?

    Read the article

  • Reboot failure after upgrade from 8.04 LTS to 10.04 LTS

    - by Alan Fietz
    I bought our computer from Freegeeks with Ubuntu 8.04 installed. I upgraded from Ubuntu 8.04 to 10.04 on Thursday November 10. I have an ASUS P4P800SE with a dual Intel P4 @ 3 GHz. Installation messages were: - Error loading Nautilus config info - Replaced customized /etc/login.defs - Replaced customized /etc/dhcp3/dhclient.conf - 189 packages removed - WARNING: Failed to read mirror file When I rebooted, the usual ASUS screen appeared, then "Loading GRUB", then "Starting up...", then "Starting up..." again, then a blank screen (the monitor went dormant). I rebooted, started GRUB and selected: version 10.04.3 LTS, kernel 2.6.32-35 generic. I got the same results. I rebooted, started GRUB and selected: kernel 2.6.24-29 generic. Here's what was displayed: udevd [875]: error getting socket: Invalid argument libudev: udev_monitor_new_from_netlink: error getting socket: Invalid argument Segmentation fault **Gave up waiting for root device** Common problems - Boot args (cat /proc/cmdline) - Check rootdelay= - Check root= - Missing modules (cat /proc/modules; **Alert! /dev/disk/by-uuid/c59c6361 etc... does not exist. Dropping to a shell.** Then BusyBox v1.13.3 started with the following prompt: (initramfs) _ But my typing did not appear on the screen. It appears the hard drive cannot be found. Any suggestion on how to remedy this? Thank you.
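
    When the boot drops to the (initramfs) prompt like this, the usual next step is to check whether the kernel can see the disk at all and whether the UUID that GRUB passed as root= actually exists; a rough sketch of what to try from that shell, or from a live CD if the keyboard is dead there (all standard BusyBox/util-linux tools):

        ls /dev/sd*                  # does the kernel see the drive and its partitions?
        ls /dev/disk/by-uuid/        # do the UUIDs match what GRUB asked for?
        cat /proc/cmdline            # shows the root=... the kernel was actually given
        blkid                        # prints the real UUIDs of the partitions

    If the UUIDs differ, fixing root= on the kernel line (or in GRUB's config) from a live CD is the typical remedy; if the device just shows up late, adding rootdelay=90 to the kernel line sometimes helps.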

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.
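
    For readers who want to try the option outside an OSnet build, a rough sketch of the idea (here $PROTO and $STUBPROTO stand in for the workspace proto areas, and libz is an arbitrary non-OSnet dependency; details will differ on a real gate):

        # non-OSnet dependencies get symlinks in the stub proto, so -L finds them there
        ln -s /usr/lib/libz.so $STUBPROTO/usr/lib/libz.so

        # anything still resolved from /lib or /usr/lib (other than the listed exception)
        # now draws a link-editor warning; -z fatal-warnings turns it into a hard error
        ld -G -o libfoo.so foo.o -L$STUBPROTO/usr/lib -L$PROTO/usr/lib -lz -lc \
            -z assert-deflib=libc.so -z fatal-warnings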

    Read the article

  • How can I reinstall QoS Packet Scheduler if it was removed from the winxp installation by nLite?

    - by Irwin1138
    I have a WinXP SP3 installation modified by nLite. This particular installation was stripped of the QoS Packet Scheduler. I was advised to remove QoS because of the overhead it produces, or something like that. Now, I read this Lifehacker post about Windows maintenance, and it says that on the contrary, by doing so I may have done more harm than good: Disabling QoS in Windows XP: Rumor had it that Microsoft had permanently tied up 20 percent of your net bandwidth for Windows Update. They didn't, and those who disable QoS, or IPv6, in XP actually end up with some pretty harsh connectivity problems. I tend to believe this, and now I seek a way to reinstall QoS. I tried to install it by going to network adapter properties - install - service, but there is no QoS there. I have the original, untouched WinXP SP3 CD. So, is there a way to bring back QoS into my WinXP installation, preferably without reinstalling Windows from scratch?

    Read the article

  • Why is Rhythmbox becoming the default (again)?

    - by Christoph
    So, it seems with 12.04, they're switching back to Rhythmbox, after switching from Rhythmbox a year ago. I don't get why. They say that it's because of a blocking bug in GTK3# (if I understand that correctly), but that's just one bug, and in the same breath they say RB is not well maintained. It seems Ubuntu guys were dissatisfied with Banshee in some way, but apparently the Banshee guys were never notified of any problems. Also, it can't be to save disc space by dropping mono, because at the same day it was announced that the install disc will be enlarged by 50MB. Also, isn't it a bit shortsighted to push Banshee for default inclusion, and then drop it again a year later? How is that a sustainable use of dev resources, or consistent? Apparently there was quite some heavy effort by banshee devs - David Nielsen used the term "bending over backwards for Ubuntu" iirc. In summary: Can anyone shed more light on this? Related question: Why is Banshee becoming the default? Sources: http://www.omgubuntu.co.uk/2011/11/banshee-tomboy-and-mono-dropped-from-ubuntu-12-04-cd/ http://www.omgubuntu.co.uk/2011/11/rhythmbox-to-return-as-ubuntu-12-04-default-music-app/ http://www.omgubuntu.co.uk/2011/11/ubuntu-12-04-disc-size-to-be-750mb/ http://summit.ubuntu.com/uds-p/meeting/19442/desktop-p-default-apps/ http://banshee-media-player.2283330.n4.nabble.com/banshee-being-dropped-from-ubuntu-because-of-GTK3-support-td3985298.html

    Read the article

  • Which problem(s) do YOU want to see solved?

    - by buu700
    My team and I are meeting tonight to come up with a business plan and some community input would be amazing. I've been mulling over this issue for the past few months and bouncing ideas off of others, and now I'd finally like some input from the community. I have come up with a fair selection of ideas, but most of those amount to either fun projects which could potentially be profitable, or otherwise solid business models that have one or two major hurdles (usually related to resources or legality). For our team meeting tonight, my idea is to take inventory of our available skills, resources, and compelling problems which interest us. The last is where I would greatly appreciate some community input. Hell, even entire business ideas/plans would be appreciated. No matter how big or small your thoughts, any input would be appreciated. We're a team of computer scientists, so our business will be primarily based around software/technology/Web solutions. Among my relevant available resources (entire Internet aside), I have the following: A pretty reliable connection to an SEO company a large production company. A stash of fairly powerful server hardware. A fast network with static IPs. The backend for Hackswipe, which includes credit card payment processing and a Google Voice-based SMS gateway. This work in progress design for something completely unrelated but which is backed by some fairly decent infrastructure. Direct access to the experts in just about any relevant field (on-campus Carnegie Mellon professors). A sexual relationship with the baron of a small nation. For further down the line, some investor relationships. Not likely to be so relevant, but a decent social media presence (Stack Overflow reputation, modship in some major reddits, various tech forums). The source code for Eugene fucking McCabe. Pooled with the other team members, the list of projects we can build off of would be longer (including an Android app). So, what are your thoughts? Crossposted to reddit

    Read the article

  • How do you engineers keep your place clean?

    - by 280Z28
    I have a major problem keeping not only my desk, but my entire place clean. I end up spending all of my time behind the keyboard and slowly but surely it all turns to crap everywhere but between my monitors and I. It's causing some major problems IRL and I need to fix it. Question: I'm an engineer, and I eat, sleep, and breathe think like an engineer. Do you have any ideas for engineering a clean place? Note to the messy users out there: Not really interested. I've been messy for ages so I've got that covered. I'm looking for real suggestions from people who actually keep clean, preferably from some who've been in my shoes before.

    Read the article

  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to improve my system to run better with a solid state drive, but I have some trouble understanding RAM disks and /etc/fstab usage. So let's say I add the following lines to /etc/fstab: tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0 tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0 tmpfs /var/tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0 tmpfs /var/log tmpfs defaults,noatime,nodiratime,mode=0755 0 0 I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the data that was stored in those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too: none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0 What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk, so should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification. UPDATE: I've downloaded an image 600 MB in size into /tmp, which is mounted with the tmpfs settings above. I wanted to compare the RAM usage before and after the download, and I expected the RAM usage to increase by 600 MB. But the System Monitor tool showed no change at all. How can this be? Does tmpfs work differently than I expect?
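
    On the first question: mounting a tmpfs over a directory does not delete what is underneath, it only hides it until the mount goes away. A small experiment on a throwaway directory makes that visible:

        mkdir ~/tmpfs-demo && touch ~/tmpfs-demo/old-file
        sudo mount -t tmpfs tmpfs ~/tmpfs-demo
        ls ~/tmpfs-demo              # empty: the on-disk file is hidden, not gone
        sudo touch ~/tmpfs-demo/ram-file
        sudo umount ~/tmpfs-demo
        ls ~/tmpfs-demo              # old-file is back; ram-file vanished with the tmpfs

    On the UPDATE: tmpfs pages are accounted as cache/shared memory (the Shmem line in /proc/meminfo) rather than as ordinary process memory, so a monitor that excludes cache from "used" memory will barely move after a 600 MB download into /tmp.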

    Read the article

  • WIFI connection stopped working on Windows - how to troubleshoot

    - by Michiel de Mare
    Friends and family often come to me with this problem, and while I usually manage to fix their problems eventually, it often takes me several hours. I'm sure there are many of us in the same position. Also, friends rarely seem to know which provider they're using, or what the password of their Wi-Fi connection or ADSL router is; I'd like to gather that information myself. I'd like to have a checklist to diagnose and fix the problem, and a set of utilities on a USB stick to help me. Where to start?
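
    As a skeleton for such a checklist, layered connectivity checks tend to localize the fault quickly; a sketch for a Linux machine (the Windows equivalents are ipconfig, route print, ping, nslookup and tracert):

        ip addr                                                     # 1. did DHCP hand out an address?
        ip route                                                    # 2. is there a default gateway?
        ping -c3 "$(ip route | awk '/^default/ {print $3; exit}')"  # 3. can we reach the router?
        ping -c3 8.8.8.8                                            # 4. does anything beyond the router answer?
        nslookup example.com                                        # 5. or is DNS the only broken piece?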

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system being unable to synchronize for some internal reason. There were several potential problems that prevented enclosing the whole synchronization process in one transaction. The decision was to restart the whole sync process if it was not finished (in case of an error). For this purpose an additional service was created. Let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process. For synchronizing, these id-s are now kept in one main place, in the Resync service database. The Resync service keeps records as: · Id.A · Id.B · Entity.Type · Operation (Create, Update, Delete) · IsSyncStarted (true/false) · IsSyncFinished (true/false) The example now looks like: 1. System A creates id.A. id.A is saved on A. Id.A is sent to BizTalk. 2. BizTalk sends id.A to the Resync and to B. id.A is saved on the Resync. 3. System B creates id.B. id.A+id.B are saved on B. id.A+id.B are sent to BizTalk. 4. BizTalk sends id.A+id.B to the Resync and to A. id.A+id.B are saved on the Resync. 5. id.A+id.B are saved on A. The Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: · Save (id.A, Entity.Type, Operation) · Save (id.A, id.B, Entity.Type, Operation) · Resync () The two Save() methods are used to save id-s to the service storage; see steps 2 and 4 in the above example. What about Resync()? It is the method that finishes the interrupted synchronization processes. While Save() is started by the trigger event, Resync() works as an independent process. It periodically scans the Resync storage to find "unfinished" records, then it restarts those synchronization processes. It tries to synchronize them several times, then gives up. One more thing: both systems A and B must tolerate duplicates from one synchronizing process. Say at step 3 system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process, which will send the id.A to B a second time. In this case system B must simply send back the already created id.A+id.B pair again without errors. That is what "tolerate duplicates" means. Fourth draft The next draft was created only for the sake of aesthetics. As always happens, aesthetics gave a significant performance gain to the whole system. First there was the stupid question: why do we need this additional service with a special database? Can we just get BizTalk to do something like what Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the Id.A message and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message. Here is a diagram of the whole process without errors. It is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the Id.A message. It resubmits the Id.A message a specific number of times, then gives up and gets suspended. It can be resubmitted, and then it starts the whole process again: waiting [, resubmitting [, getting suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag. The subscription filter on the Resync orchestration includes a predicate such as (Resubmit_Flag != "Resubmitted").
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestrations instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync. Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B returned the id.A+id.B response and this finished the Resync execution. What is interesting about this: several identical id.A messages were submitted and only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already mentioned this requirement for system B; now the same requirement applies to the Resync. Let's assume system B was very slow with the first response and the Resync service had time to resubmit two id.A messages. System B then responded not, as in the previous case, with one id.A+id.B but with two id.A+id.B messages. The first of them finished the Resync execution for that id.A. What about the second id.A+id.B? Where does it go? So we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. That is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one receive shape) that just takes these messages and does nothing. Real design The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also the Update and Delete operations, and these operations relate to each other. Say, an Update operation after a Delete does not mean the same as an Update after a Create. In reality there are entities related to each other, say the Order and the Order Items; a change to one of them can start a series of operations on the other. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of the operation series. It is worth saying that I had to spend some time managing the zombie message problems. The zombies are still here, but this is not a problem now. And that is another story. What is interesting in the last design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable; isn't that something strange? The Sync orchestration handles all the message exchange between systems, which is the area where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?

    Read the article

  • What is the proper way to install the 3.4 kernel?

    - by Marcelo Ruiz
    Kernel 3.2 has an annoying error for my wireless card (rtl8192se-b) that makes the connection drop and/or prevents the card from connecting to the wireless router. Dealing with it was very frustrating until I found out the bug was corrected in 3.4. I downloaded: linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb and installed with: sudo dpkg -i * Now the wireless works fine, but I have two problems that I cannot solve. The first one is minor: plymouth would not start at all. But if I boot with the 3.2 kernel it works fine. The second one is serious: sometimes the computer won't shut down or reboot. The X server terminates but the computer shows part of my grub background and will stay there forever using 100% of the CPU. I have a Toshiba Qosmio with a Core i7 and an Nvidia graphics card (using nvidia-current). During one shutdown, I briefly read a message that said that the virtualbox module couldn't be unloaded from the kernel. I tried to solve this by removing and purging virtualbox and installing it again. I don't see the message anymore, but sometimes the computer won't shut down or reboot. Am I missing something to properly configure the new kernels? Thanks!
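
    The shutdown hang and the missing Plymouth splash both point at third-party modules (virtualbox, nvidia) that may never have been built for the mainline 3.4 kernel. A hedged check-and-rebuild sketch, using the kernel version string from the .deb names above:

        # which out-of-tree modules DKMS has built, and for which kernels
        dkms status

        # rebuild everything for the 3.4 kernel if it is missing from that list
        sudo dkms autoinstall -k 3.4.0-030400-generic

        # confirm the modules landed where the new kernel will look for them
        ls /lib/modules/3.4.0-030400-generic/updates/dkms/

    If the Nvidia module refuses to build against kernel 3.4, the proprietary driver itself may need updating first; that part is a guess based on the Optimus/nvidia-current combination mentioned above.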

    Read the article

  • Replacing a W2K3 Domain Controller - what do I need to know?

    - by Marko Carter
    I have a network of around 70 machines, currently with two DCs, both running Windows Server 2003 (DC0 & DC1). DC0 is a five-year-old PowerEdge 1850 that has recently become increasingly flaky, and in the past fortnight it has fallen over twice. I want to replace this machine, but I'm cautious as there is huge scope for this sort of thing to go wrong. The way I imagine doing this is building a new machine, then doing a DCPROMO and running three domain controllers for a month or so until I'm happy that everything is working as it should be, before retiring the old machine. Particular areas of concern are the replication of roles from the current controllers (GP settings, for instance) and the ramifications of switching off the machine that has, up until now, been the 'primary'. If there are compelling reasons to use Server 2008 I'm willing to do so; however, I don't know if this would cause problems with my existing 2003 machines. Any advice on best practice or previous experiences would be most welcome.

    Read the article
