Search Results

  • Need help installing Ubuntu on a Win7 Alienware M17x R3; installation not working

    - by Rodrigo
    Please, I need help installing Linux on my computer so I can start analyzing Next Generation Sequencing (molecular genetics) data for my work. I've tried to install it in several different ways, and each gave me different errors.

    First I will describe my system: Alienware M17x R3, RAID drive 596GB (2x 298GB), Windows 7 on primary partition C:. The hard drive is divided into 3 primary partitions (I can't create a 4th, I don't know why):

    1st: FAT16, no name, capacity 40MB, almost unused (has files like command.com, dellbio.bin, dellrmk.bin and oobedone.flg). I don't know what this is for, but it is locked and non-system (Status: none).
    2nd: Recovery, NTFS, 9GB (Status: system); it seems to hold boot files.
    3rd: C:, Windows OS (Status: boot).

    I also have 2 logical drives: F:, data for backup and games :D (300GB), and G:, for trying to install Linux on (already tried with FAT32, ext2, ext3 and ext4).

    Installing with Wubi on F: and G:: the installation looks good in Windows, it asks to restart, and a DOS-mode screen appears:

        try (HD0,0) non-ms: skip
        try (HD0,1) NTFS5: no wubildr
        try (HD0,2) NTFS5: Error: "Prefix" is not set.

    Installing with Wubi on C:: the installation looks good in Windows, it asks to restart, and a DOS-mode screen appears:

        Mount denied because the NTFS volume is already exclusively opened. The volume may be already mounted, or another software may use it which could be identified for example by the help of the 'fuser' command.
        Mount devdm-4
        Could not find installation files /ubuntu/install custom-installation
        Run 'chkdsk /r' on Windows and restart... that should correct it.

    I did this; it didn't work.

    I also tried installing from a USB drive and from a CD burned with the ISO file downloaded from the official website. I tried the option to install alongside Windows with the G: drive in FAT32, ext2, ext3 and ext4; I tried leaving the space unallocated; I tried without the G: drive at all. Every time this appears: http://i.stack.imgur.com/eo2Ie.png. Pressing OK, it asks me: "Not possible to install the boot loader at the location", and to specify a new location. I've tried selecting every drive and none is accepted. If I choose to continue without installing the boot loader, it tells me to install it later by myself, which I'm trying to figure out, but I'm not good at the command-line side of things; I'm a newbie and I've never worked on DOS or Linux before.

    I've read that the boot loader has to be installed on a primary partition, but I'm not sure what I can delete from those first two partitions to free one up, and it seems that the one Windows uses is write-protected, so the installer is not touching it.

    Please, if someone has any idea to help me out... I have a course tomorrow on how to use the software to analyze the data and I wanted to take my computer with me, so I've spent the last 3 days searching for different ways to solve this. My question is also posted on bugs.launchpad.net: bug #882695

    Read the article

  • Managing OpenX campaigns for several advertisers

    - by Mauricio Scheffer
    We're running our own OpenX instance with several advertisers. Each month, each advertiser's campaign should have its available impressions reset, based on whatever plan the advertiser paid for. I think this is a pretty common scenario. The question is: how do you manage this? Do you expire the old campaign and create a new one with the new amount of impressions? Or do you just add the impressions to the existing campaign?

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by [email protected]
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, simulated press reports and market impacts roll in... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US Federal policies. However, it touches on some broader issues that impact the private sector as well, and which are applicable to any government and country/region, such as:

    · How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

    · What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That got me thinking:

    · How would it impact Oracle's customers? I know they have business continuity plans; is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

    · How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Gedit drag and drop code

    - by blade19899
    I am using gedit (3.4.1) to edit my website. Usually when I change/remove/edit something, I have to make the same change in all of my HTML pages so they keep the same layout. Right now I am removing some code and pasting other code in its place. My question is: can I save that code as a snippet or some such? I want to be able to drop that code onto gedit and, if possible, apply it to all of my opened HTML pages.

    Read the article

  • How would I tell if a prospective client website is under DDoS attack?

    - by artlung
    I have a person asking me whether the DDoS Mitigation Service they're using is worth it. This is out of my expertise, but clearly at some point someone sold this service to the client. Assuming I don't have anything but a domain name, what information can I gather about whether they are indeed under attack and/or how well the DDoS Mitigation Service is working? Assume I don't have any administrative access to the site/server(s) in question.
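    With nothing but a domain name, about all you can measure is outside-in behaviour: DNS (does the name resolve into the mitigation provider's network?) and reachability/latency over time. A minimal Python probe sketch (the URL and sampling intervals are placeholders, standard library only):

        import time
        import urllib.request

        URL = "http://example.com/"   # placeholder for the client's site

        for _ in range(10):           # sample a few times; run far longer in practice
            start = time.monotonic()
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    status = resp.status
            except Exception as exc:
                status = f"error: {exc}"
            elapsed = time.monotonic() - start
            print(f"{time.strftime('%H:%M:%S')} status={status} latency={elapsed:.2f}s")
            time.sleep(30)

    Sustained timeouts, 5xx responses or wildly varying latency are consistent with (but do not prove) an attack or a mitigation layer absorbing one; pair this with whois/DNS records to confirm traffic actually routes through the mitigation service.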

    Read the article

  • Languages for embedded systems in the aeronautic and space sector

    - by Niels
    I know that my question is very broad, but a general answer would be nice. I would like to know which are the main languages used in the aeronautic and space sector. I know that the OSes which run on these embedded systems are RTOSes (real-time OSes), and I think that these languages must be checked rigorously by different methods (formal methods, unit tests) and must permit sound verification of a whole program.

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element.

    I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters:

    𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF
    𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T
    𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO
    𠂊 (U+2008A) Han Character

    You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:

    Opera has problems with editing them (delete requires 2 presses on backspace).
    Notepad can't deal with them correctly (delete requires 2 presses on backspace).
    File name editing in Windows dialogs is broken (delete requires 2 presses on backspace).
    All QT3 applications can't deal with them; they show two empty squares instead of one symbol.
    Python encodes such characters incorrectly when used directly: u'X'!=unicode('X','utf-16') on some platforms when X is a character outside the BMP.
    Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings.
    StackOverflow seems to remove these characters from the text if edited directly as Unicode characters (these characters are shown using HTML Unicode escapes).
    WinForms TextBox may generate an invalid string when limited with MaxLength.

    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
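    A quick way to see the variable-length behaviour for yourself: a minimal Python 3 sketch (standard library only; the printed code-unit order assumes a little-endian platform) showing that one code point outside the BMP becomes two UTF-16 code units:

        s = "\U0001D11E"                 # U+1D11E MUSICAL SYMBOL G CLEF
        utf16 = s.encode("utf-16-le")    # encode without a BOM
        print(len(s))                    # 1 code point
        print(len(utf16) // 2)           # 2 UTF-16 code units (a surrogate pair)
        print([hex(u) for u in memoryview(utf16).cast("H")])  # ['0xd834', '0xdd1e']

    Any code that treats those two units as two "characters" exhibits exactly the backspace and MaxLength bugs listed above.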

    Read the article

  • Website Optimisation - The Impact of Blended Real-time Search So Far

    Despite the initial hype surrounding the introduction of blended real-time search into internet search engines, many experts have begun to question its value to website optimisation. Real-time search has been widely criticised as a cause of SERP clutter, making pages appear chaotic and leaving the user struggling to decide which links may actually provide the information they are after.

    Read the article

  • Why Use !boolean_variable Over boolean_variable == false

    - by ell
    A comment on this question: Calling A Method that returns a boolean value inside a conditional statement says that you should use !boolean instead of boolean == false when testing conditions. Why? To me boolean == false is much more natural in English and is more explicit. I apologise if this is just a matter of style, but I was wondering if there was some other reason for this preference for !boolean?
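    One non-stylistic reason: in languages with truthiness or nullable booleans, the two forms are not even equivalent. A small Python sketch (values chosen purely for illustration) of where not x and x == False disagree:

        for x in [False, 0, 0.0, "", [], None]:
            print(repr(x).ljust(6), "not x:", not x, "| x == False:", x == False)
        # not x is True for every falsy value above, while x == False is True
        # only for False, 0 and 0.0; the two spellings express different tests.

    In strictly typed languages the two compile to the same thing, so there the argument really is one of idiom and readability.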

    Read the article

  • Why are there no package management systems for C and C++?

    - by m0nhawk
    There are some programming languages which have their own package management systems:

    CTAN for TeX
    CPAN for Perl
    Pip & Eggs for Python
    Maven for Java
    Cabal for Haskell
    Gems for Ruby

    Are there any other languages with such systems? What about C and C++? (That's the main question!) Why are there no such systems for them? And isn't creating packages for yum, apt-get or other general package management systems better?

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 2

    Earlier I ran a puzzle where I asked a question regarding the size of the index table for each index in a database, over here: SQL SERVER – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I received a good number of answers and I blogged about them here: SQL SERVER [...]

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of Resetting Identity Values for All Tables. After talking to the DBA I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED Identity is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
            [ID] [int] IDENTITY(11,1) NOT NULL,
            [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    When the seed value is 11, the next value inserted gets identity column value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the DELETE statement:

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause it deletes all the rows. However, when a new record is inserted the identity value is increased from 11 to 12. It does not reset but keeps on increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement:

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed it removes all the rows. However, when a new record is inserted the identity value starts again from 11 (the original value): TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the RESEED statement. If you notice, I am using the reseed value 1. The original seed value when I created the table was 11. However, I am reseeding it with value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more value and check it, the new value generated is 2. The logic for this new value is Reseed Value + Interval Value; in this case it is 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean up act.

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: If I reseed with some random number and follow it with a TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed to 1, and then truncate the table, what is the seed value now?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:

    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation?

    So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.

    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.

    There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
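    A quick arithmetic check of the numbers above, as a small Python sketch (the resolution list is illustrative): the common vertical resolutions are multiples of 360, and 1920×1080 decomposes into 16 and 9 blocks of 120 pixels, matching the line-driver layout described in the answer.

        verticals = [360, 720, 1080, 1440, 2160]
        print(all(v % 360 == 0 for v in verticals))   # True
        w, h = 1920, 1080
        print(w // 120, h // 120)                     # 16 9, i.e. the 16:9 ratio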

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away?

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome:

    I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold: You don’t need to (memorize and) input those long and cryptic passwords. These are available wherever you are once you log in to your Google account. The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is: can a Google employee see my passwords? Searching over the Internet revealed several articles/messages.

    Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from a Chrome browser security tech lead about the first issue.

    Chrome’s insane password security strategy: Mostly along the same line. You can steal a password from somebody if you have access to their computer account.

    How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else’s account.

    There are many more (including this one at this site), mostly along the same line: points, counter-points, huge debates. I refrain from mentioning them here; simply run a search if you want to find them.

    Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, they can definitely be unhashed (decrypted), even if encrypted. This is very different from the passwords saved in Unix-like OSes, where a saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password.

    So are his concerns well founded or will a little insight dispel his worry?

    The Answer

    SuperUser contributor Zeel helps put his mind at ease:

    Short answer: No*

    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn’t work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can’t get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.

    Now the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google’s end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn’t do them any good, since they would have no way to use it.*

    So no, Google employees can not** access your passwords, since they are encrypted on their servers.

    * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. . . That being said, I think I will trust Google and the millions they spend on security systems, over any other password storage solution. And heck, I’m a wimpy nerd; it would be easier to beat the passwords out of me than break Google’s encryption.

    ** I am also assuming that there isn’t a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn’t actually a factor any more. Moral: Hit Win + L before leaving your machine.

    While we agree with Zeel that it’s a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
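    The one-way flow MMA describes for Unix-like systems is easy to sketch: store only a salt and a hash, re-hash every login attempt and compare, never reverse. A toy Python illustration (hashlib/hmac from the standard library; real systems add per-user records and tuned work factors):

        import hashlib, hmac, os

        def make_record(password: str):
            # Keep only a salt and a one-way hash; the password itself is discarded.
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def check_login(password: str, salt: bytes, digest: bytes) -> bool:
            # Re-hash the attempt and compare; the stored value is never decrypted.
            attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(attempt, digest)

        salt, digest = make_record("hunter2")
        print(check_login("hunter2", salt, digest))  # True
        print(check_login("wrong", salt, digest))    # False

    Chrome cannot use this scheme for saved site passwords precisely because, as Zeel notes, auto-fill must recover the plain text; that is why those are stored reversibly (encrypted) rather than hashed.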

    Read the article

  • Does Hard Drive Orientation Affect Its Lifespan?

    - by Jason Fitzpatrick
    Many cases allow you to mount drives in vertical or horizontal configurations, and external drives can be easily repositioned. Does the orientation of the hard drive affect the performance and longevity of the drive? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Wireless not working on Dell Inspiron 1501 after upgrading to Ubuntu 12.04; tried steps in other threads

    - by mark burton
    I updated to Ubuntu 12.04 and now my wireless is not working. There is no icon for it. I tried some of the troubleshooting in other threads but can't get it to work. Would really appreciate any help. Thanks!

    lshw output:

        *-network
           description: Network controller
           product: BCM4311 802.11a/b/g
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:05:00.0
           version: 01
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list
           configuration: driver=b43-pci-bridge latency=0
           resources: irq:18 memory:c0200000-c0203fff
        *-network
           description: Ethernet interface
           product: BCM4401-B0 100Base-TX
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:08:00.0
           logical name: eth0
           version: 02
           serial: 00:19:b9:5c:d1:52
           size: 100Mbit/s
           capacity: 100Mbit/s
           width: 32 bits
           clock: 33MHz
           capabilities: pm bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=b44 driverversion=2.0 duplex=full ip=10.0.0.6 latency=64 link=yes multicast=yes port=twisted pair speed=100Mbit/s
           resources: irq:21 memory:c0300000-c0301fff

    lsusb results:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 002: ID 046d:c526 Logitech, Inc. Nano Receiver

    $ lspci

        00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI RS480 Host Bridge (rev 10)
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge
        00:05.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge
        00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA
        00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI0)
        00:13.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI1)
        00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI2)
        00:13.3 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI3)
        00:13.4 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI4)
        00:13.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB Controller (EHCI)
        00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 13)
        00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB600 IDE
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA)
        00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB600 PCI to LPC Bridge
        00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS482 [Radeon Xpress 200M]
        05:00.0 Network controller: Broadcom Corporation BCM4311 802.11a/b/g (rev 01)
        08:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev 02)
        08:01.0 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
        08:01.1 System peripheral: Ricoh Co Ltd R5C843 MMC Host Controller (rev 01)

    rfkill list all:

        0: dell-wifi: Wireless LAN
           Soft blocked: no
           Hard blocked: no

    Read the article

  • Calculating 2D (screen) coordinates from 3D positions in XNA 4.0

    - by NDraskovic
    I have a program that draws some items to the scene by loading their positions from a file. Now I want to place a Ray at the same location where the items are drawn. So my question is: how can I calculate the position of the ray (its 2D components) by using the 3D coordinates of each particular item? The items don't move anywhere; once they are placed they stay until the end of the program's execution. Thanks.
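    For reference, XNA exposes this directly as Viewport.Project, which maps a world-space Vector3 to screen coordinates given the projection, view and world matrices. The underlying math, as a minimal language-agnostic sketch in Python with numpy (the matrix names are placeholders; XNA's row-vector convention differs from the column-vector one used here):

        import numpy as np

        def world_to_screen(p_world, view, proj, vp_width, vp_height):
            # Project a 3D world position to 2D screen coordinates.
            p = np.append(np.asarray(p_world, float), 1.0)  # homogeneous coords
            clip = proj @ view @ p                          # world -> clip space
            ndc = clip[:3] / clip[3]                        # perspective divide
            x = (ndc[0] * 0.5 + 0.5) * vp_width             # [-1,1] -> [0,w]
            y = (1.0 - (ndc[1] * 0.5 + 0.5)) * vp_height    # screen y grows down
            return x, y

    In XNA itself the equivalent would be along the lines of GraphicsDevice.Viewport.Project(itemPosition, projection, view, Matrix.Identity), then taking the result's X and Y.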

    Read the article

  • Basis of definitions

    - by Yttrill
    Let us suppose we have a set of functions which characterise something: in the OO world, methods characterising a type. In mathematics these are propositions, and we have two kinds: axioms and lemmas. Axioms are assumptions; lemmas are easily derived from them. In C++, axioms are pure virtual functions.

    Here's the problem: there's more than one way to axiomatise a system. Given a set of propositions or methods, a subset of the propositions which is necessary and sufficient to derive all the others is called a basis. So too for methods or functions: we have a desired set which must be defined, typically each one has one or more definitions in terms of the others, and we require the programmer to provide instance definitions which are sufficient to allow all the others to be defined and, if there is an overspecification, that it is consistent.

    Let me give an example (in Felix; Haskell code would be similar):

        class Eq[t] {
          virtual fun ==(x:t, y:t):bool => eq(x,y);
          virtual fun eq(x:t, y:t) => x == y;
          virtual fun != (x:t, y:t):bool => not (x == y);
          axiom reflex(x:t): x == x;
          axiom sym(x:t, y:t): (x == y) == (y == x);
          axiom trans(x:t, y:t, z:t): implies(x == y and y == z, x == z);
        }

    Here it is clear: the programmer must define either == or eq or both. If both are defined, the definitions must be equivalent. Failing to define one doesn't cause a compiler error, it causes an infinite loop at run time. Defining both inequivalently doesn't cause an error either; it is just inconsistent. Note the axioms specified constrain the semantics of any definition. Given a definition of ==, either directly or via a definition of eq, != is defined automatically, although the programmer might replace the default with something more efficient; clearly such an overspecification has to be consistent. Please note, == could also be defined in terms of !=, but we didn't do that.

    A characterisation of a partial or total order is more complex. It is much more demanding, since there is a combinatorial explosion of possible bases. There is a reason to desire overspecification: performance. There is also another reason: choice and convenience.

    So here, there are several questions. One is how to check that the semantics are obeyed, and I am not looking for an answer here (way too hard!). The other question is: how can we specify, and check, that an instance provides at least a basis? And a much harder question: how can we provide several default definitions which depend on the basis chosen?
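    For comparison, Python's functools.total_ordering is a stock library example of the "provide a basis, derive the rest" idea: define __eq__ plus any one ordering method, and the decorator fills in the remaining comparisons. A minimal sketch (the class and values are illustrative):

        from functools import total_ordering

        @total_ordering
        class Version:
            def __init__(self, major, minor):
                self.key = (major, minor)
            def __eq__(self, other):         # basis element 1
                return self.key == other.key
            def __lt__(self, other):         # basis element 2
                return self.key < other.key

        # __le__, __gt__ and __ge__ are derived from the basis above.
        print(Version(1, 2) <= Version(1, 3))   # True, via a derived method

    Notably, it too cannot check that an overspecified instance is consistent: if you hand-define a __le__ that disagrees with __lt__ and __eq__, nothing complains.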

    Read the article

  • What can a Service do on Windows?

    - by Akemi Iwaya
    If you open up Task Manager or Process Explorer on your system, you will see many services running. But how much of an impact can a service have on your system, especially if it is ‘corrupted’ by malware? Today’s SuperUser Q&A post has the answers to a curious reader’s questions.

    Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Forivin wants to know how much impact a service can have on a Windows system, especially if it is ‘corrupted’ by malware:

    What kind of malware/spyware could someone put into a service that does not have its own process on Windows? I mean services that use svchost.exe, for example. Could a service spy on my keyboard input? Take screenshots? Send and/or receive data over the internet? Infect other processes or files? Delete files? Kill processes? How much impact could a service have on a Windows installation? Are there any limits to what a malware-‘corrupted’ service could do?

    The Answer

    SuperUser contributor Keltari has the answer for us:

    What is a service? A service is an application, no more, no less. The advantage is that a service can run without a user session. This allows things like databases, backups, the ability to log in, etc. to run when needed and without a user logged in.

    What is svchost? According to Microsoft: “svchost.exe is a generic host process name for services that run from dynamic-link libraries”. Could we have that in English please? Some time ago, Microsoft started moving all of the functionality from internal Windows services into .dll files instead of .exe files. From a programming perspective, this makes more sense for reusability… but the problem is that you can not launch a .dll file directly from Windows; it has to be loaded up from a running executable (.exe). Thus the svchost.exe process was born.

    So, essentially a service which uses svchost is just calling a .dll and can do pretty much anything with the right credentials and/or permissions. If I remember correctly, there are viruses and other malware that do hide behind the svchost process, or name their executable svchost.exe to avoid detection.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
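    If you want to see this grouping on your own machine, one hedged sketch: the third-party psutil library can enumerate Windows services, and svchost-hosted entries show svchost.exe -k <group> as their binary path (assumes Windows, pip install psutil, and psutil's win_service_iter API from recent versions):

        import psutil  # third-party; the service API below is Windows-only

        # List services whose host process is svchost.exe.
        for svc in psutil.win_service_iter():
            if "svchost.exe" in svc.binpath().lower():
                print(f"{svc.name():30} {svc.status():10} {svc.binpath()}")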

    Read the article

  • SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index

    This blog post is in response to ongoing communication with the reader who had earlier asked the question in SQL SERVER – Disable Clustered Index and Data Insert. The same reader has asked me about the difference between ALTER INDEX ALL REBUILD and ALTER INDEX REBUILD in combination with a disabled clustered index. Instead of writing a big [...]

    Read the article

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client's site. I immediately enquired whether any folder had been specified while setting up the MDW, and the obvious answer was no, as the defaults were left in place. This means all the .CACHE files are stored under the %C\TEMP directory, which may pose an out-of-disk-space problem on the server where the MDW is set up to collect. Going back...(read more)

    Read the article

  • How to uninstall LaTeX?

    - by ATMathew
    Ubuntu and LaTeX newbie here; this is a very basic question. I recently installed LaTeX using the following command:

        sudo apt-get install texlive

    However, I want to uninstall this and just get the LaTeX base for now. I tried the following but it doesn't work:

        sudo apt-get remove texlive

    dpkg --get-selections | grep tex shows that a number of LaTeX-related packages are still installed. How do I remove texlive?

    Read the article

  • Is it possible to use Python with the Ubuntu SDK?

    - by Zignd
    David Planella wrote in his answer to a question I posted that "...the recommended way to develop apps for Ubuntu is the Ubuntu SDK." So I installed it, but it looks like the supported programming language is C++. Does this mean I will need to know C++ to develop a new application for Ubuntu? Is C++ the recommended programming language for Ubuntu applications now? What about Python? I started learning it hoping to develop applications for Ubuntu.

    Read the article

  • How do I install root-cern?

    - by Rick Turd
    CERN's ROOT environment installs fine on 10.10, and is even available in the Software Center, but it will not install or compile on Natty because the X11 directories are in new locations. I can redefine the paths, but then I get compile errors because only gcc 4.3 is supported by ROOT and Natty uses 4.5. I've given up and gone back to 10.10, but my question is: after installing ROOT on Maverick, if I upgrade back to Natty, will my old software still work with these new directory paths?

    Read the article

  • What Does My BIOS Do After Booting?

    - by Jason Fitzpatrick
    Once your computer finishes the boot process and you’re firmly inside the operating system buzzing along, is there anything left for the BIOS to do? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article
