Search Results

Search found 1128 results on 46 pages for 'sees'.

Page 7/46 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Using web proxies - safe to enter passwords?

    - by bergin
    Hi Wanted to check something on a local site and see how the outside world sees it. however, using a web proxy im not sure that when i enter my credentials the proxy wont record this and give the proxy owner access to my site. is there another way to see my own site as though I was on the other side?

    Read the article

  • Using mod_rewrite to hide tomcat port

    - by user123181
    I have apps on Tomcat that use URLs like this: http://xxx:8080/myapp. I don't want users to see the port in the URL. I can add a rewrite rule like this:

        RewriteRule ^/myapp(.*) http://xxx:8080/myapp$1 [P,L]

    This way, if a user goes to http://xxx/myapp he can reach the app fine, but the port still shows up in the browser. I want the URL the user sees to always be http://xxx/myapp. How can I do this using mod_rewrite?
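    One common way to keep the port out of what the browser sees (a sketch, not verified against this setup) is to let mod_proxy handle the mapping directly, assuming mod_proxy and mod_proxy_http are enabled:

        ProxyPreserveHost On
        ProxyPass        /myapp http://xxx:8080/myapp
        ProxyPassReverse /myapp http://xxx:8080/myapp

    ProxyPassReverse rewrites the Location headers on redirects coming back from Tomcat, which is usually what makes :8080 reappear in the address bar; the RewriteRule with [P] alone does not fix those.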

    Read the article

  • Cron Job that Boots Screen at Start

    - by Pez Cuckow
    I am trying to set up a number of processes that start during boot (servers for games) with the command below as the cron item:

        @reboot /usr/bin/screen -fa -d -m -S NAME COMMAND

    However, if the server crashes for whatever reason, screen closes and the server doesn't get a chance to run its auto-restart (as far as I understand it, screen sees no processes in the socket and so closes). Is there a way I can get around this so screen will sit there even if nothing is running in it?
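    One workaround (a sketch; NAME and COMMAND are the placeholders from the question) is to hand screen a small shell loop instead of the bare command, so the window never exits and the server is restarted if it dies:

        @reboot /usr/bin/screen -fa -d -m -S NAME /bin/bash -c 'while true; do COMMAND; sleep 5; done'

    Because the loop itself keeps running, screen keeps the session open even after a crash, and the sleep prevents a tight restart loop if the server dies immediately.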

    Read the article

  • Problem installing CanonMF5880dn

    - by Paul
    Just got a Canon MF5880dn and cannot print to it from Suse 11.1:
    - MacBook prints w/o issue
    - ping 192.168.1.103: no problem
    - CUPS sees it as Canon MF5880/MF5840 PCL at URI socket://192.168.1.103:9100
    - CUPS test print appears to submit and complete the job, but no action from the printer
    - YaST also seems to install the printer correctly
    - CQue2 also seems to install the printer correctly
    - all attempts to print yield the same result: Suse indicates the job processed correctly and completely, but no printing happens
    - firewall is off
    - http://192.168.1.103 in Firefox gives me the printer config menus correctly
    What have I failed to do?
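    Since CUPS is pointed at socket://192.168.1.103:9100, one quick check (a sketch, assuming netcat is installed) is to bypass CUPS entirely and push a few bytes straight at the JetDirect port:

        printf 'Raw test from Linux\r\n\f' | nc 192.168.1.103 9100

    Most PCL printers will wake up and eject a page on the trailing form feed; if nothing at all happens, the problem is likely on the printer side (wrong data port, raw print service disabled) rather than in the Suse print stack.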

    Read the article

  • Is there a way to replicate a very large file share in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder usually don't change very much compared to the last one, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple, anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough meta-data to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how this works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at is this; say it's 5AM and the cron job finishes:
    1. New Version.5 folder arrives on the local server
    2. SSH to the remote server and copy Version.4 to Version.5
    3. Run rsync on the local server pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5
    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
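    A sketch of that 5AM sequence (host and path names are invented for illustration):

        # 1. seed the new hour on the remote side from the previous hour
        ssh backuphost 'cp -a /backup/Version.4 /backup/Version.5'
        # 2. rsync then only has to move the delta between the two hours
        rsync -a --delete /server/Version.5/ backuphost:/backup/Version.5/

    rsync's --link-dest option does essentially the same seeding with hard links instead of a full copy (the trick rsnapshot-style tools use), so it is worth trying before scripting the cp by hand.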

    Read the article

  • WLAN adapter on Ubuntu Server inside Hyper-V

    - by Firefox333
    I need to set up an Ubuntu server as a router. However, we need to make it both wireless and wired, so I need a WLAN adapter for the wireless part of the router. I get my Internet connection on the server through my wireless adapter from the host, but the guest automatically sees it as an Ethernet adapter instead of a wireless adapter. Is there any way of making a (virtual) wireless adapter on Ubuntu Server 12.04 inside a Hyper-V machine?

    Read the article

  • Ask filesystem if it is mounted

    - by Brian
    How can I see if a (ext3) filesystem is mounted by asking the filesystem directly (i.e. the same way that the system does when it boots and sees that it was not unmounted cleanly)? Checking the output of mount is no good because the filesystem might be mounted by a virtual machine. I know I can run fsck and it will abort if the filesystem is mounted, but I don't need to actually check the filesystem.
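    The flag that the boot-time check looks at lives in the ext3 superblock, so it can be read directly with dumpe2fs (a sketch; /dev/sdb1 is a placeholder device name):

        sudo dumpe2fs -h /dev/sdb1 | grep -i 'filesystem state'
        # "clean"      -> the filesystem was unmounted cleanly
        # "not clean"  -> it is currently mounted read-write somewhere, or it crashed while mounted

    The obvious caveat is that the superblock cannot distinguish "mounted right now" from "was not unmounted cleanly last time", which is the same ambiguity fsck reports.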

    Read the article

  • Vizio Co-Star MediaTomb video loads but does not play

    - by jeremyjjbrown
    I set up MediaTomb on 12.04 to stream videos using these instructions. The Vizio Co-Star I am using as the player sees the server and starts to load the MP4 video. The video, however, does not play; the player just exits after a few seconds. There is no mention of errors in the MediaTomb log. Update: I installed VLC to test the server and it works fine, so perhaps I need a different video server for the Co-Star. I also tested the video itself by playing it from a USB thumb drive.

    Read the article

  • JPA/EclipseLink multitenancy screencast

    - by alexismp
    I find JPA, and in particular EclipseLink 2.3, to be particularly well suited to illustrate the concept of multitenancy, one of the key PaaS features en route for Java EE 7. Here's a short (5-minute) screencast of GlassFish 3.1.1 (due out real soon now) and its EclipseLink 2.3 JPA provider showing multitenancy in action. In short, it adds EclipseLink annotations to a JPA entity and deploys two identical applications with different tenant-id properties defined in the persistence.xml descriptor. Each application only sees its own data, yet everything is stored in the same table, which was augmented with a discriminator column. For more advanced uses, such as the tenant property being set on the @PersistenceContext, XML configuration of multitenant JPA entities, and more, check out the nicely written wiki page.
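    Roughly what the entity side of that looks like (a sketch based on the EclipseLink 2.3 multitenancy annotations; the entity and column names here are invented):

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.eclipse.persistence.annotations.Multitenant;
        import org.eclipse.persistence.annotations.TenantDiscriminatorColumn;

        @Entity
        @Multitenant                                   // single-table strategy is the default
        @TenantDiscriminatorColumn(name = "TENANT_ID") // the discriminator column added to the table
        public class Account {
            @Id
            private Long id;
            private String name;
            // getters and setters omitted
        }

    Each deployed copy of the application would then carry its own tenant id in persistence.xml, e.g. <property name="eclipselink.tenant-id" value="appA"/>.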

    Read the article

  • RT3290 Bluetooth not pairing in Ubuntu 14.04

    - by Nashhole
    I recently followed the instructions listed in the following link to get my RT3290 Bluetooth working on my laptop. These instructions have yielded the most progress I have had in the year I have had this laptop. My machine now sees my Bluetooth, I can scan for and see devices, and other devices can see my laptop, but pairing continually fails. Ralink RT3290 Bluetooth Problem on Ubuntu 14.04

    lspci reads:
        04:00.1 Bluetooth: Ralink corp. RT3290 Bluetooth

    rfkill list reads:
        0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no

    dmesg | grep Blue reads:
        [ 5.965811] Bluetooth: Core ver 2.17
        [ 5.965833] Bluetooth: HCI device and connection manager initialized
        [ 5.965840] Bluetooth: HCI socket layer initialized
        [ 5.965842] Bluetooth: L2CAP socket layer initialized
        [ 5.965847] Bluetooth: SCO socket layer initialized
        [ 6.038085] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [ 6.038088] Bluetooth: BNEP filters: protocol multicast
        [ 6.038096] Bluetooth: BNEP socket layer initialized
        [ 6.058013] Bluetooth: RFCOMM TTY layer initialized
        [ 6.058024] Bluetooth: RFCOMM socket layer initialized
        [ 6.058029] Bluetooth: RFCOMM ver 1.11

    Anyone have any thoughts or ideas I could try? Thanks in advance for your time and assistance.

    Read the article

  • If my team has low skill, should I lower the skill of my code?

    - by Florian Margaine
    For example, there is a common snippet in JS to get a default value: function f(x) { x = x || 10; } This kind of snippet is not easily understood by all the members of my team, their JS level being low. Should I not use this trick then? It makes the code less readable to my peers, though any JS dev would find it more readable than the following: function f(x) { if (!x) { x = 10; } } Sure, if I use this trick and a colleague sees it, then they can learn something. But more often they see it as "trying to be clever". So, should I lower the level of my code if my teammates have a lower level than me?

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns. Last week's post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!). Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    Recap

    As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member. While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much.

    With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work.

    With .NET 4.0, we get the best of both worlds in generic collections. A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace. These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access. They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms.

    Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist. As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()).

    Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see the wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    ConcurrentDictionary – the fully thread-safe dictionary

    The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey,TValue> collection. Obviously, both are designed for quick – O(1) – lookups of data based on a key. If you think of algorithms where you need lightning fast lookups of data and don't care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go.

    Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively. While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search.
    Now, once again, keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking. Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference. In such cases locking is completely non-productive.

    However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates. This is where the ConcurrentDictionary really shines! It achieves its thread-safety with no common lock, to improve efficiency. It actually uses a series of locks to provide concurrent updates, and has lockless reads! This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have.

    ConcurrentDictionary and Dictionary differences

    For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart, with a few differences. Some notable examples are:

    - Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd(). It also means that you can't use a collection initializer with the concurrent dictionary.
    - TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn't already exist, we need an atomic operation that checks whether the item exists and, if not, adds it while still under an atomic lock.
    - TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be. If all these are true, we can update the item in one atomic step.
    - TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock.
    - AddOrUpdate() was added to attempt a thread-safe "upsert". There are many times when you want to insert into a dictionary if the key doesn't exist, or update the value if it does. This allows you to make a thread-safe add-or-update.
    - GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes you want to query whether an item exists in the cache, and if it doesn't, insert a starting value for it. This allows you to get the value if it exists and insert it if not.
    - The Count, Keys, and Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution.
    - ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked and then copied to an array as an O(n) operation.
    - GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary. The only downside is that, depending on timing, you may get dirty reads.

    Dirty reads during iteration

    The last point on GetEnumerator() bears some explanation. Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated. This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly. The problem is that items you already iterated over that are updated a split second after don't show the update, but items that you iterate over that were updated a split second before do show the update.
    Thus you may get a combination of items that are "stale" because you iterated before the update, and "fresh" because they were updated after GetEnumerator() but before the iteration reached them. Let's illustrate with an example. Let's say you load up a concurrent dictionary like this:

        // load up a dictionary.
        var dictionary = new ConcurrentDictionary<string, int>();

        dictionary["A"] = 1;
        dictionary["B"] = 2;
        dictionary["C"] = 3;
        dictionary["D"] = 4;
        dictionary["E"] = 5;
        dictionary["F"] = 6;

    Then you have one task (using the wonderful TPL!) to iterate using dirty reads:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates using a dirty read
            foreach (var pair in dictionary)
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    And one task to attempt updates in a separate thread (probably):

        // attempt updates in a separate thread
        var updateTask = new Task(() =>
        {
            // iterates, and updates the value by one
            foreach (var pair in dictionary)
            {
                dictionary[pair.Key] = pair.Value + 1;
            }
        });

    Now that we've done this, we can fire up both tasks and wait for them to complete:

        // start both tasks
        updateTask.Start();
        iterationTask.Start();

        // wait for both to complete.
        Task.WaitAll(updateTask, iterationTask);

    Now, if you didn't know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6). However, because the reads are dirty, we will quite possibly get a combination of some updated, some original. My own run netted this result:

        F:6
        E:6
        D:5
        C:4
        B:3
        A:2

    Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered. Also note that both E and F show the value 6. This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively).

    If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates over a static snapshot from ToArray()
            foreach (var pair in dictionary.ToArray())
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    The atomic Try…() methods

    As you can imagine, TryAdd() and TryRemove() have few surprises. Both first check the existence of the item to determine if it can be added or removed, based on whether or not the key currently exists in the dictionary:

        // try add attempts an add and returns false if it already exists
        if (dictionary.TryAdd("G", 7))
            Console.WriteLine("G did not exist, now inserted with 7");
        else
            Console.WriteLine("G already existed, insert failed.");

    TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key:

        // attempt to remove the value, if it exists it is removed and the original is returned
        int removedValue;
        if (dictionary.TryRemove("C", out removedValue))
            Console.WriteLine("Removed C and its value was " + removedValue);
        else
            Console.WriteLine("C did not exist, remove failed.");

    Now TryUpdate() is an interesting creature.
    You might think from its name that TryUpdate() first checks for an item's existence, and then updates if the item exists, otherwise it returns false. Well, not quite... It turns out that when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update. If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true. If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned.

        // attempt to update the value, if it exists and if it has the expected original value
        if (dictionary.TryUpdate("G", 42, 7))
            Console.WriteLine("G existed and was 7, now it's 42.");
        else
            Console.WriteLine("G either didn't exist, or wasn't 7.");

    The composite Add methods

    The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get.

    The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn't exist, or update the existing item if it does. For example, let's say you are creating a dictionary of counts of stock ticker symbols you've subscribed to from a market data feed:

        public sealed class SubscriptionManager
        {
            private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();

            // adds a new subscription, or increments the count of the existing one.
            public void AddSubscription(string tickerKey)
            {
                // add a new subscription with count of 1, or update existing count by 1 if exists
                var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);

                // now check the result to see if we just incremented the count, or inserted first count
                if (resultCount == 1)
                {
                    // subscribe to symbol...
                }
            }
        }

    Notice the update value factory Func delegate. If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary. The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated).

    Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value. This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn't you would put the item into the cache for the first time:

        public sealed class PriceCache
        {
            private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();

            // returns the cached price, computing and caching it on first request.
            public double QueryPrice(string tickerKey)
            {
                // check for the price in the cache, if it doesn't exist it will call the delegate to create value.
                return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
            }

            private double GetCurrentPrice(string tickerKey)
            {
                // do code to calculate actual true price.
                return 0.0; // placeholder
            }
        }

    There are other variations of these two methods which vary in whether a value or a factory delegate is provided, but otherwise they work much the same.

    Oddities with the composite Add methods

    The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic. It is important to note that the methods that use delegates execute those delegates outside of the lock. This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads.

    This is not necessarily an issue, per se, but it is something you must consider in your design. The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned! Consider this scenario: thread A calls GetOrAdd() and sees that the key does not currently exist, so it calls the delegate. Now thread B also calls GetOrAdd() and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary. Now A is done and goes to get the lock, and sees that the item now exists. In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and the time it attempted to add it.

    Let's illustrate. Assume this totally contrived example program which has a dictionary of char to int. In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc.). So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked):

        public static class Program
        {
            private static int _nextNumber = 0;

            // the holder of the char to ordinal
            private static ConcurrentDictionary<char, int> _dictionary
                = new ConcurrentDictionary<char, int>();

            // get the next id value
            public static int NextId
            {
                get { return Interlocked.Increment(ref _nextNumber); }
            }

    Then, we add a method that will perform our insert:

            public static void Inserter()
            {
                for (int i = 0; i < 26; i++)
                {
                    _dictionary.GetOrAdd((char)('A' + i), key => NextId);
                }
            }

    Finally, we run our test by starting two tasks to do this work and get the results…

            public static void Main()
            {
                // two tasks attempting to get/insert
                var tasks = new List<Task>
                {
                    new Task(Inserter),
                    new Task(Inserter)
                };

                tasks.ForEach(t => t.Start());
                Task.WaitAll(tasks.ToArray());

                foreach (var pair in _dictionary.OrderBy(p => p.Key))
                {
                    Console.WriteLine(pair.Key + ":" + pair.Value);
                }
            }
        }

    If you run this with only one task, you get the expected A:1, B:2, ..., Z:26. But running this in parallel you will get something a bit more complex. My run netted these results:

        A:1
        B:3
        C:4
        D:5
        E:6
        F:7
        G:8
        H:9
        I:10
        J:11
        K:12
        L:13
        M:14
        N:15
        O:16
        P:17
        Q:18
        R:19
        S:20
        T:21
        U:22
        V:23
        W:24
        X:25
        Y:26
        Z:27

    Notice that B is 3? This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3. However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won, the 3 was inserted, and the 2 got discarded.
    This is why, for these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible.

    Summary

    The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.

    Read the article

  • Virus that makes all files and folders a read-only filesystem on a USB drive

    - by ren florento
    Is there any way to remove a virus from Windows that makes the files and folders, and the USB drive itself, a read-only filesystem? This is an annoying one, because the virus keeps copying itself whenever it sees a folder and keeps running, which prevents you from creating and deleting files and folders on the USB drive and makes " mount -o remount,rw '/path' " ineffective. By the way, I'm not really sure it is a virus, but what makes me think so is that it creates a .exe file within every folder, named after the folder, and the drive also immediately reverts to a read-only filesystem, which locks the files and folders even after executing " mount -o remount,rw '/path' ". I also think the virus is only running within the USB drive, as it is not affecting the folders on Ubuntu. I could just reformat the USB drive, as it only contains a few important files, but what concerns me is whether such a virus (or whatever you may call it) could get into my backup drives, which contain many important files. Thanks for any help and advice you can give.
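    Separately from the malware question, a stick that keeps falling back to a read-only filesystem very often has filesystem errors, which commonly causes the kernel to drop it to read-only. A sketch of a cleanup pass, assuming a FAT-formatted stick that shows up as /dev/sdb1 (check with lsblk first):

        sudo umount /dev/sdb1
        sudo dosfsck -a /dev/sdb1      # repair the FAT filesystem (package: dosfstools)
        sudo mount /dev/sdb1 /mnt      # remount read-write and clean out the .exe droppers

    If it still reverts to read-only after a clean fsck, the stick's write-protect switch or failing flash is a more likely culprit than the malware itself.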

    Read the article

  • Adding hard drive failed

    - by dennis ditch
    I was using von Welch's instructions at http://v2kblog.blogspot.com/2007/05/adding-second-hard-drive.html to install a 500 GB Seagate drive to write the recordings to. Everything seemed to be going OK until mkfs /dev/sdb1; then we get an error message:

        mkfs.ext2: inode_size (128) * inodes_count (0) too big for a
        filesystem with 0 blocks, specify a higher inode_ratio (-i)
        or lower inode count (-N)

    My son is trying to help me, but this is beyond him. Our knowledge of Unix/Linux is very limited; at work the support people just sent me a line-by-line cookbook. I would appreciate any help you can give us. The computer is a Gateway MDP E4000. Mythbuntu is installed on a PATA drive and we are adding a SATA drive for the second drive. The BIOS sees the drive.
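    That mkfs complaint about "0 blocks" usually means the partition it was pointed at does not actually exist yet (or has zero size), rather than anything being wrong with mkfs itself. A sketch of how to double-check and retry, assuming the same device names as in the post:

        sudo fdisk -l /dev/sdb        # confirm /dev/sdb1 is listed with roughly the expected 500 GB size
        sudo fdisk /dev/sdb           # if it is not, create a primary partition spanning the disk and write the table with 'w'
        sudo mkfs.ext3 /dev/sdb1      # then build the filesystem on the partition, not on the whole disk /dev/sdb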

    Read the article

  • Grand Theft Auto IV – Awesome Ghost Rider Mod [Videos]

    - by Asian Angel
    Recently we shared the video for a terrific Back to the Future GTA IV mod with you and today we are back with videos for a wicked Ghost Rider mod. One thing is sure, with Ghost Rider cruising through town the nights in Liberty City have never been hotter! Note: Videos contain some language that may be considered inappropriate. The first video focuses on the main working mod while the second focuses on the new ‘Wall Ride’ feature that sees Ghost Rider going up and down walls.

    Read the article

  • When to detect collisions in game loop

    - by Ciaran
    My game loop uses a fixed time step to do "physics" updates, say every 20 ms, and this is where I move objects. I draw frames as frequently as possible. I work out a value between 0 and 1 to represent the proportion of the physics tick that is complete and interpolate between the previous and current physics state before drawing. It results in a smoother game, assuming the frame rate is higher than the physics update rate. I am currently doing the collision detection in the physics update routine. I was wondering whether it should instead take place in the interpolated draw routine, where the positions match what the user sees? Collisions can result in explosions, by the way.
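    For reference, a stripped-down sketch of the loop described above (the World interface and method names are invented stand-ins, not a real engine API), with comments marking where each of the two options for collision detection would sit:

        // A minimal fixed-timestep loop with render interpolation, as in the question.
        public final class FixedStepLoop {
            private static final long STEP_NANOS = 20_000_000L; // 20 ms physics tick

            public static void run(World world) {
                long previous = System.nanoTime();
                long accumulator = 0;

                while (world.isRunning()) {
                    long now = System.nanoTime();
                    accumulator += now - previous;
                    previous = now;

                    while (accumulator >= STEP_NANOS) {
                        world.savePreviousState();
                        world.updatePhysics(STEP_NANOS / 1e9);
                        world.detectAndResolveCollisions(); // option A: detect against authoritative physics positions
                        accumulator -= STEP_NANOS;
                    }

                    double alpha = (double) accumulator / STEP_NANOS; // 0..1 into the next tick
                    // option B would test collisions against the interpolated positions used here,
                    // but those are only a visual blend between two physics states
                    world.render(alpha);
                }
            }

            public interface World {
                boolean isRunning();
                void savePreviousState();
                void updatePhysics(double dtSeconds);
                void detectAndResolveCollisions();
                void render(double alpha);
            }
        }

    The usual argument for keeping detection in the physics step is that the interpolated positions are presentation-only, so collisions (and the resulting explosions) triggered from them could disagree with the authoritative simulation state.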

    Read the article

  • Platform Builder: Disable the USB Driver Dialog

    - by Bruce Eitman
    For a long time, Windows CE developers and users have wanted to disable the USB driver dialog that is displayed when an unknown USB device is plugged into the host controller. Of course, the question is always: why would you want to do such a thing? The simple answer is that there are USB devices that are needed, like printers, which expose multiple functions to the bus, like scanners and faxes, for which no Windows CE driver exists. So the printer quietly loads a driver, but then the other functions cause a dialog to be shown. One solution is to create a USB class driver that loads by default if no other driver has been loaded. This driver just accepts anything that it sees and then does nothing with it. Starting with the Windows Embedded CE 6.0 R3 March QFE/update, the USB 2.0 driver has a registry value to disable the dialog:

        [HKEY_LOCAL_MACHINE\Drivers\USB\LoadClients]
            "DoNotPromptUser"=dword:0

    Setting the DoNotPromptUser value to 1 disables the dialog. The default value is zero, so the driver continues to behave the way it always did unless you change this registry value.

    Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • Using External Monitor with Laptop Monitor as separate monitors

    - by user14623
    I am trying to use a 32" LCD monitor with my Ubuntu 10.10 installation. I am trying to use my laptop screen and the external monitor at the same time, but as two separate desktops. I also want to use my laptop at 1280x800 resolution and the external one at 1920x1080 over VGA. However, Ubuntu sees my external LCD as a CRT and provides 13....x... resolution at best, not above. My graphics driver is the Nvidia 270.... driver. Is Ubuntu capable of using two monitors separately, or should I give up? Thank you

    Read the article

  • GDL Presents: Women Techmakers with Diane Greene

    GDL Presents: Women Techmakers with Diane Greene. Megan Smith co-hosts with Cloud Platform PM Lead Jessie Jiang. They will be exploring former VMware CEO and current Google, Inc. board member Diane Greene's strategic thoughts about Cloud at a high level, as well as the direction in which she sees the tech industry heading for women. Hosts: Megan Smith - Vice President, Google [x] | Jessie Jiang - Product Management Lead, Google Cloud Platform. Guest: Diane Greene - Board of Directors, Google, Inc.

    Read the article

  • How to configure multiple monitors with Optimus?

    - by irrational
    I have an Acer Aspire 8951G running 12.04 Pangolin with Bumblebee working beautifully. My problem is that when I connect either the VGA port or the HDMI port to my projector, there is no way I can see to properly set up the resolutions or colours. The default basic display driver sees the projector correctly, but messes up colours and resolutions on HDMI, and resolutions on VGA. (It's a 1280 x 720 projector.) Am I missing some sort of Xorg configuration? nvidia-xconfig does not seem to exist, and running optirun nvidia-settings -c :8 opens the settings, but of course only for the one display. I just want a way to set a default config for my projector via VGA or, preferably, HDMI. Any help would be wonderful.
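    If the HDMI/VGA connectors on this machine are wired to the integrated Intel GPU (which varies by model on Optimus laptops), the projector is driven by the normal non-Bumblebee X server and can be configured with plain xrandr. A sketch, with output names that will differ per machine:

        xrandr -q                                         # list outputs and the modes the projector advertises
        xrandr --output HDMI1 --mode 1280x720 --right-of LVDS1

    If the projector's native 1280x720 mode is not listed at all, it can usually be added with cvt plus xrandr --newmode and --addmode.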

    Read the article

  • Ubuntu One Windows Client 3.0.1 - Sync not connecting

    - by Tweezak
    I've got sync working at home on Natty and I need to access files at work. I've just installed client 3.0.1 at work on Windows 7. It's finding my account and sees what devices are already set up for sync but it just repeatedly tries to sync (File sync starting...) and after a while fails (File Sync is disconnected.). This cycle loops endlessly. I'm suspicious that it's a problem with my proxy setup. My employer doesn't use a manual proxy configuration but instead uses an automatic configuration script at a specific http address. Does U1 recognize that kind of setup or is it only looking for a proxy server in the form of an address/port?

    Read the article

  • How do I get a Netgear WNDA3100V2 working?

    - by Michal
    I have Ubuntu 11.10 on my desktop. A month ago I bought a Linksys AE1000 adapter; I did not check whether it works on Ubuntu, and because I've lost the receipt I'm stuck with it. Last week I bought a Netgear adapter, and this time I did check and it was meant to be plug and play, but it was not. I have checked many forums and managed to install the software; the system does see the adapter, but it's not connecting to the network. I have found that it may not like WPA, so I created my own password - letters and digits, no spaces - and still nothing. I don't understand why. This is my next attempt with Linux, and I'm not from an IT background, so it takes time and research before I can resolve something, but I really want to learn. I so wish to learn on Ubuntu. One day I checked Fedora 16 and my old Linksys AE1000 worked without any installation.

    Read the article

  • Purple screen on boot, iMac

    - by Eugene B
    I have just installed Ubuntu 13.10 (the special iMac ISO found here) on the new iMac (dual boot). The installation of rEFIt completed successfully, as did the installation of Ubuntu itself. After the final reboot, rEFIt sees the distribution and allows the choice. When I select "Boot Linux from HD", it sends me to the GRUB screen, where I can select Ubuntu. And then it gets stuck on the purple screen (smpboot: Booting Node 0, Processors #1 -- for the recovery mode) with no further action. Does anybody know a solution to this problem? P.S.: I have also tried both the 32-bit and 64-bit PC images, with the same result.
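    One common first thing to try when the boot hangs right after GRUB on a Mac is the nomodeset kernel parameter, which delays kernel mode setting for the graphics card (a sketch; it may or may not apply to this particular hang):

        # At the GRUB menu, highlight the Ubuntu entry, press 'e', and append
        # nomodeset to the line beginning with "linux", then boot with F10/Ctrl+X.
        # If that gets to the desktop, make it permanent:
        sudo sed -i 's/quiet splash/quiet splash nomodeset/' /etc/default/grub
        sudo update-grub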

    Read the article

  • How to explain to a non-technical person why a task will take much longer than they think?

    - by Mag20
    Almost every developer has to answer questions from the business side like: "Why is it going to take 2 days to add this simple contact form?" When a developer estimates this task, they may divide it into steps:
    - make some changes to the database
    - optimize the DB changes for speed
    - add the front-end HTML
    - write the server-side code
    - add validation
    - add client-side JavaScript
    - use unit tests
    - make sure the SEO setup is working
    - implement email confirmation
    - refactor and optimize the code for speed
    - ...
    These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours MAX. So is there a better way to explain why the estimate is high to a non-developer?

    Read the article
