Search Results

Search found 21436 results on 858 pages for 'draw order'.


  • Creating Persistent Drive Labels With UDEV Using /dev/disk/by-path

    - by Matt
    I have a new BackBlaze Pod (BackBlaze Pod 2.0). It has 45 3TB drives; when I first set it up they were labeled /dev/sda through /dev/sdz and /dev/sdaa through /dev/sdas. I used mdadm to set up three really big 15-drive RAID6 arrays. However, since the first setup a few weeks ago, a couple of the hard drives have failed on me. I've replaced them, but now the arrays are complaining because they can't find the missing drives. When I list the disks...

        ls -l /dev/sd*

    ...I see that /dev/sda, /dev/sdf, /dev/sdk and /dev/sdp no longer appear, and now there are 4 new ones: /dev/sdau, /dev/sdav, /dev/sdaw and /dev/sdax. I also just found that I can do this...

        ls -l /dev/disk/by-path/
        total 0
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-0:0:0:0 -> ../../sdau
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:1:0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:2:0:0 -> ../../sdc
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:3:0:0 -> ../../sdd
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:4:0:0 -> ../../sde
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-2:0:0:0 -> ../../sdae
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:1:0:0 -> ../../sdg
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:2:0:0 -> ../../sdh
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:3:0:0 -> ../../sdi
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:4:0:0 -> ../../sdj
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-3:0:0:0 -> ../../sdav
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:1:0:0 -> ../../sdl
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:2:0:0 -> ../../sdm
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:3:0:0 -> ../../sdn
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:4:0:0 -> ../../sdo
        lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:04:04.0-scsi-0:0:0:0 -> ../../sdax
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:1:0:0 -> ../../sdq
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:2:0:0 -> ../../sdr
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:3:0:0 -> ../../sds
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:4:0:0 -> ../../sdt
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:0:0:0 -> ../../sdu
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:1:0:0 -> ../../sdv
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:2:0:0 -> ../../sdw
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:3:0:0 -> ../../sdx
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:4:0:0 -> ../../sdy
        lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-3:0:0:0 -> ../../sdz

    I didn't list them all; you can see the problem above. They're sorted by SCSI ID here, but sda is missing, replaced by sdau, and so on. So obviously the arrays are complaining. Is it possible to get Linux to reread the drive labels in the correct order, or am I screwed? My initial design with 15-drive arrays is not ideal: with 3TB drives the rebuild times were taking 3 or 4 days, maybe more. I'm scrapping the whole design, and I think I am going to go with 6 x 7-disk RAID5 arrays and 3 hot spares, to make the arrays a bit easier to manage and to shorten the rebuild times. But I'd like to clean up the drive labels so they aren't out of order. I haven't figured out how to do this yet. Does anyone know how to get this straightened out? Thanks, Matt
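
    Since the question is about persistent labels, here is a minimal udev sketch of the approach the title refers to, assuming the PCI paths shown above stay fixed across boots; the rules file name, the slot names, and the two example paths are invented for illustration. Each rule keys a symlink to the physical slot's ID_PATH, so whatever letter the kernel hands a swapped-in disk, /dev/pod/slotNN stays put:

        # /etc/udev/rules.d/65-pod-slots.rules (hypothetical file; numbered after
        # 60-persistent-storage.rules so that ID_PATH is already populated)
        KERNEL=="sd?*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:1:0:0", SYMLINK+="pod/slot02"
        KERNEL=="sd?*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:2:0:0", SYMLINK+="pod/slot03"

    Separately, note that mdadm identifies array members by the UUIDs in their superblocks rather than by /dev/sdX names, so 'mdadm --assemble --scan' should reassemble the arrays regardless of how the kernel has reshuffled the letters.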


  • Getting back the old alt-tab windows switching behavior in Windows 7?

    - by Carlos A. Ibarra
    When you run more than 6 applications on Windows 7 and you press alt-TAB, icons representing the first 6 applications and the desktop appear on the first row of the grid and you can cycle with alt-TAB-TAB... through the 6 most recently used windows the usual way, but the 7th and other less recently used windows don't follow the same rules. Instead they get grouped together according to their application but disregarding whether they were recently used or not. This new behavior is mentioned here. I am very used to the old way of cycling and the new system is driving me crazy. I tend to have 20 or so windows open at one time and I frequently need to alt-tab to the 7th or 8th window on the stack but it doesn't work the same anymore. Does anyone know how to put back the old behavior, so that alt-tab-tab-tab... goes through the whole list in most-recent to least-recent order?
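
    For what it's worth, Windows 7 still carries the classic XP-style switcher behind the old AltTabSettings registry switch; a hedged sketch of enabling it for the current user (restarting Explorer makes it take effect), with the caveat that the classic switcher changes the visuals as well as the ordering:

        rem Enable the classic most-recently-used Alt-Tab switcher, then restart Explorer.
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v AltTabSettings /t REG_DWORD /d 1 /f
        taskkill /f /im explorer.exe
        start explorer.exe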


  • Drawing transparent glyphs on the HTML canvas

    - by Bertrand Le Roy
    The HTML canvas has a pair of methods, createImageData and putImageData, that look like they will enable you to draw transparent shapes pixel by pixel. The data structures that you manipulate with these methods are pseudo-arrays of pixels, with four bytes per pixel: one byte for red, one for green, one for blue and one for alpha. This alpha byte makes one believe that you are going to be able to manage transparency, but that's a lie. Here is a little script that attempts to overlay a simple generated pattern on top of a uniform background:

        var wrong = document.getElementById("wrong").getContext("2d");
        wrong.fillStyle = "#ffd42a";
        wrong.fillRect(0, 0, 64, 64);
        var overlay = wrong.createImageData(32, 32),
            data = overlay.data;
        fill(data);
        wrong.putImageData(overlay, 16, 16);

    where the fill method sets the pixels in the lower-left half of the overlay to opaque red, and the rest to transparent black. And here's how it renders: as you can see, the transparency byte was completely ignored. Or was it? In fact, what happens is more subtle. The pixels from the image data, including their alpha byte, replaced the existing pixels of the canvas. So the alpha byte is not lost; it's just that it wasn't used by putImageData to combine the new pixels with the existing ones. This is in fact a clue to how to write a putImageData that works: we can first dump that image data into an intermediary canvas, and then compose that temporary canvas onto our main canvas. The method that we can use for this composition is drawImage, which works not only with image objects, but also with canvas objects.

        var right = document.getElementById("right").getContext("2d");
        right.fillStyle = "#ffd42a";
        right.fillRect(0, 0, 64, 64);
        var overlay = right.createImageData(32, 32),
            data = overlay.data;
        fill(data);
        var overlayCanvas = document.createElement("canvas");
        overlayCanvas.width = overlayCanvas.height = 32;
        overlayCanvas.getContext("2d").putImageData(overlay, 0, 0);
        right.drawImage(overlayCanvas, 16, 16);

    And there it is, a version of putImageData that works like it should always have.
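
    The fill method itself isn't shown in the post; a minimal sketch consistent with its description (opaque red in the lower-left half, transparent black elsewhere), with the 32-pixel dimensions assumed from the createImageData calls above:

        function fill(data) {
            var size = 32; // assumed: the overlay above is 32x32
            for (var y = 0; y < size; y++) {
                for (var x = 0; x < size; x++) {
                    var i = (y * size + x) * 4; // 4 bytes per pixel: R, G, B, A
                    if (x <= y) {
                        // lower-left half: opaque red
                        data[i] = 255; data[i + 1] = 0; data[i + 2] = 0; data[i + 3] = 255;
                    } else {
                        // everywhere else: transparent black
                        data[i] = 0; data[i + 1] = 0; data[i + 2] = 0; data[i + 3] = 0;
                    }
                }
            }
        }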


  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well, except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to keep repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious: caching and negative-caching stale results?
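
    As a concrete reference point for the nscd option, host caching is switched on in /etc/nscd.conf; a minimal sketch (the TTL numbers are illustrative, not recommendations):

        # /etc/nscd.conf - cache host (DNS) lookups locally
        enable-cache            hosts   yes
        positive-time-to-live   hosts   60    # seconds a successful lookup is kept
        negative-time-to-live   hosts   20    # seconds an NXDOMAIN answer is kept
        suggested-size          hosts   211

    One caveat worth knowing: nscd applies these fixed TTLs rather than honouring each record's own DNS TTL, which is precisely the stale-result problem the question flags.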


  • WebCenter Customer Advisory Board meetings kick off Oracle Open World 2012!

    - by Lance711
    Welcome to OpenWorld! OpenWorld 2012 got underway today with a series of meetings with the members of the WebCenter Customer Advisory Board. Led by the WebCenter Product Management team, these meetings are a great way for the product team and customers to interact directly, discuss real-life business challenges and product details, and talk through upcoming features and functionality. This year, board members participated in discussions and live demos of the product enhancements that will be featured throughout the coming week. Highlights included a variety of new mobile and social solutions, a great new user interface for WebCenter Content, plus new Portal and Sites functionality that makes the experience for the everyday user a lot more pleasant. The day kicked off with Roel Stalman, VP of Product Management, giving a detailed overview of what's new in WebCenter. Given all the improvements to discuss, this session went over 2 hours! Roel showcased the brand new UI for Content, Portal and Sites. He also gave live demos of the new mobile apps for WebCenter Content, Portal and the Oracle Social Network. The attendees then broke into sub-groups to deep-dive with Product Management for the Portal, Sites, and Content product areas on specific functionality and application integrations. If you are here in San Francisco this week for OpenWorld, I definitely recommend stopping by the WebCenter area in the Moscone West Exhibition Hall to see some of this new functionality for yourself. And be sure to check out the WebCenter sessions throughout the week, as those give us a chance to discuss direction and strategy, answer your questions, and get your feedback and ideas. For those of you who could not make it to OpenWorld this year, we miss you! You can stay in touch with what is happening via this blog and by following #oow and #webcenter on Twitter. Additionally, we will be rolling out details on upcoming products and release info over the coming months via this blog and web seminars. Stay tuned!


  • Assigning resources to MS Project 2007

    - by adam
    Hi, I'm planning a redesign of a site in Project 2007. I have three developers to hand, all with the same skills. There are about 80 templates to be rendered as part of the redesign, and each template has been added as a project task. Each of these tasks can be done by any of the 3 devs, and each will take a day (with a few exceptions). There is no order in which the tasks must be completed, so there are no predecessor rules. I'd like to be able to assign tasks to a 'Developer' resource group, and for Project to see that three tasks can be done at once (as the group has three resources members) and queue the tasks as such. Googling leads me to Team Assignment, but that appears to be part of Project Server. Surely I can do this in standalone Project? Thanks, Adam


  • Make cloudera-vm work on Oracle VM VirtualBox

    - by ????? ????????
    I downloaded this and the instructions say: "Important: You must enable the I/O APIC in order to use 64-bit mode. (See http://www.virtualbox.org/manual/ch03.html.) On newer versions of VirtualBox, it may default to using SATA as the disk interface. This can cause a kernel panic in the VM. Switching to the IDE driver solves this problem." I am running this on Red Hat in 64-bit mode (I've also tried Ubuntu 64-bit, with the same result). I pointed to the cloudera-vm image as a startup disk for the VM. I am getting this message:

        Failed to open a session for the virtual machine ClouderaDevelopment.
        VT-x features locked or unavailable in MSR. (VERR_VMX_MSR_LOCKED_OR_DISABLED).
        Result Code: E_FAIL (0x80004005)
        Component: Console
        Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}

    Does anyone know what I am doing wrong?
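
    VERR_VMX_MSR_LOCKED_OR_DISABLED generally means the host's hardware virtualization (Intel VT-x / AMD-V) is switched off, or locked out, by the firmware, so the usual fix is enabling it in the host BIOS and then powering the machine fully off and on. Two quick host-side checks, sketched against the VM name from the error above:

        # Does the host CPU advertise virtualization extensions at all?
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Ensure the VM requests hardware virtualization and the I/O APIC.
        VBoxManage modifyvm "ClouderaDevelopment" --hwvirtex on --ioapic on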


  • Desktop Fun: Need for Speed Wallpaper Collection

    - by Asian Angel
    Are you a passionate fan of the Need for Speed series or racing games in general? Then start your engines, turn up the radio, and get ready to race with our Need for Speed Wallpaper collection. Note: Click on the picture to see the full-size image; these wallpapers vary in size, so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen's resolution. Note: At 6236*2268 pixels, this last wallpaper will need to be decreased in size before being placed on an appropriately sized white background matching your monitor's resolution. For more wallpapers be certain to see our great collections in the Desktop Fun section.


  • Is there a way to group desktop icons on the taskbar?

    - by Memor-X
    I have a folder on the desktop which has a bunch of programs I use frequently. I can't pin all these programs to the taskbar individually; there are too many for the screen width, so it would just make the taskbar scrollable. I am wondering if I can do one of the following: 1) pin the icons to the taskbar under one icon, or 2) pin the folder to the taskbar separately from the Windows Explorer button (which, when no folders are open, opens the Libraries, and when folders are open, shows me the open folders). That way, if I have 5 folders open plus my frequently-used-programs folder, I can just click on that folder's icon on the taskbar and be given that folder only. I'm trying to reduce the number of clicks, the scrolling, and the scanning across the taskbar I need to do in order to find a program.


  • HP dv9000 Vista laptop won't boot from CD/DVD drive

    - by scottedwards2000
    My HP dv9000 Vista laptop recently got the BSOD with error 0x0000c1f5. The only way to fix this error is to be able to boot from CD/DVD and use some repair software I have. The problem is that the laptop REFUSES to boot from any CD/DVD I try. I've changed the boot order so the CD/DVD is first, and I can hear the drive spin up a bit upon power-up, but after a second, it spins down and then the laptop tries to boot from hard drive. Any ideas? (I've tried lots of CDs so it's not the media itself) Thanks much!


  • Why a static main method in Java and C#, rather than a constructor?

    - by Konrad Rudolph
    Why did (notably) Java and C# decide to have a static method as their entry point – rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor which, at least to me, seems more natural? I’m interested in a definitive answer from a primary or secondary source, not mere speculations. This has been asked before. Unfortunately, the existing answers are merely begging the question. In particular, the following answers don’t satisfy me, as I deem them incorrect: There would be ambiguity if the constructor were overloaded. – In fact, C# (as well as C and C++) allows different signatures for Main so the same potential ambiguity exists, and is dealt with. A static method means no objects can be instantiated before so order of initialisation is clear. – This is just factually wrong, some objects are instantiated before (e.g. in a static constructor). So they can be invoked by the runtime without having to instantiate a parent object. – This is no answer at all. Just to justify further why I think this is a valid and interesting question: Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point1. Neither Java nor C# technically need a main method. Well, C# needs one to compile, but Java not even that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principle of Java and C#. To be clear, there isn’t a specific disadvantage to having a static main method, it’s just distinctly odd, which made me wonder if there was some technical rationale behind it. I’m interested in a definitive answer from a primary or secondary source, not mere speculations. 1 Although there is a callback (Startup) which may intercept this.
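
    To make the contrast concrete, a minimal Java sketch of the two styles; the ApplicationStyle class is hypothetical, illustrating the alternative the question proposes rather than anything either runtime actually supports:

        // Today's convention: the runtime calls a static entry point.
        public final class MainStyle {
            public static void main(String[] args) {
                System.out.println("started with " + args.length + " argument(s)");
            }
        }

        // The question's alternative: the runtime would instantiate an
        // application object, with a constructor as the entry point.
        final class ApplicationStyle {
            ApplicationStyle(String[] args) {
                System.out.println("constructed with " + args.length + " argument(s)");
            }
        }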


  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS), in its 2005 and 2008 incarnations, expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali. A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

        Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well. Perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly; any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead:

        Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here, though, is that your package will ignore the fact it can't find the configuration and execute anyway. This is really, really bad, because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months, all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above. Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required': any parameter with Required=True must have a value passed to it when the package executes, and any attempt to execute the package without one will result in an error. Here we see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL:

        Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
        In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Marking a parameter as Required means that no package in that project can execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True; certainly any that are used to define external locations should, anyway. @Jamiet
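
    For context, this is roughly what satisfying a required parameter looks like when executing from the SSISDB catalog in Denali; the folder, project, package and parameter names below are invented for the sketch:

        DECLARE @exec_id BIGINT;
        EXEC [SSISDB].[catalog].[create_execution]
            @folder_name = N'MyFolder',
            @project_name = N'MyProject',
            @package_name = N'MyPackage.dtsx',
            @use32bitruntime = 0,
            @execution_id = @exec_id OUTPUT;

        -- A required parameter must be supplied before start_execution,
        -- otherwise the Msg 27184 error shown above is raised.
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id = @exec_id,
            @object_type = 20,    -- 20 = project parameter, 30 = package parameter
            @parameter_name = N'TargetServer',
            @parameter_value = N'MYSERVER';

        EXEC [SSISDB].[catalog].[start_execution] @execution_id = @exec_id;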


  • What is the correct term for - server/client database sync via API?

    - by Daniel
    Forgive the vague question title. I've been programming mobile apps for 3 years now, and I've drifted a little further from web services and server-side code than I probably should have. Anyway, I'm doing a personal project now and I want to create a web API for it. One of my requirements is to check for updates from my app, so I would send a timestamp to the API. I've used many APIs that my clients prepared for me, and only now am I appreciating their work! What is the term or technique used to create an API backed by a database which tracks changes via dates/timestamps; basically, an effective way for me to query changes occurring since a timestamp? Simply put, I want my app to be able to call my API in order to sync new data and changed data from the server to the app. The app would only have a timestamp of the last time it synced with the server. Would I have a log table for each data table in my database which adds a record for each change? Then I could query all changes with a timestamp later than the one passed to the API. Can anyone point me in the right direction on this?
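
    What this describes is commonly handled as a timestamp-based delta sync. A minimal SQL sketch of the usual shape (table and column names invented): every syncable row carries a last-modified timestamp plus a soft-delete flag, since a deleted row must still show up in the delta, and the API returns everything newer than the client's last sync time.

        -- Each syncable table tracks modification time and soft deletion.
        CREATE TABLE notes (
            id         INTEGER PRIMARY KEY,
            body       TEXT,
            deleted    INTEGER   NOT NULL DEFAULT 0,
            updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- The handler behind GET /sync?since=<timestamp> boils down to:
        SELECT id, body, deleted, updated_at
        FROM notes
        WHERE updated_at > :since   -- bound to the client's last sync time
        ORDER BY updated_at;

    The application (or a trigger) must also refresh updated_at on every INSERT and UPDATE for the delta query to stay truthful.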


  • Windows 7 complaint

    - by Chris Williams
    Let me start by saying that I love Windows 7. I think it's the best OS that Microsoft has put out in ages, possibly ever. However, I do have one little complaint. Actually it's not that little, it's become a real pain in the butt for me. I'm talking about Forced Updates. Yes, I know it's always been a problem and that Windows would occasionally force a reboot while you were away, in order to install some important update. That's not quite what I'm referring to. I mean the new "feature" where you don't have the choice to skip updates when shutting down. This isn't a big deal to those of you with desktop machines, but for those of us with laptops, it is rapidly becoming an unforgivable pain in the ass. Let me see if I can make myself a little clearer... If I am shutting down my LAPTOP, 99% of the time it's because I need to get up and go. Not wait around for FORCED UPDATES!! I travel a lot, and there are few things more annoying than shutting down to head to the airport, or shutting down so I can board my flight, or shutting down because we're about to land, etc... and having to wait 5-10 minutes while Win 7 does it's thing. It's damn inconvenient. There has to be a way you can detect if I'm on a laptop and give me the option to postpone updates, or skip them or (here's a thought) run them on startup instead of on shutdown. I'm usually not in a hurry when my machine is booting up, but if I'm powering down it's because I'm ready to GO! Please fix this. Windows 7 rocks in almost every other way I can think of.
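
    For anyone in the same boat, there are documented Windows Update group policies that at least keep "Install Updates and Shut Down" from hijacking the shutdown button; a hedged sketch of the equivalent registry values (machine-wide, so run from an elevated prompt, and note this defers the installs rather than removing them):

        rem Don't make "Install Updates and Shut Down" the default choice in the dialog.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAUAsDefaultShutdownOption /t REG_DWORD /d 1 /f
        rem Hide the "Install Updates and Shut Down" option entirely.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAUShutdownOption /t REG_DWORD /d 1 /f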


  • Multiple PHP versions running as cgi

    - by Pierre
    I'm trying to install a second version of PHP, to run alongside the current version of PHP. I've compiled the latest PHP source from github (5.5-DEV), and I'm trying to run it as CGI. Here is my virtual host config:

        <VirtualHost *:8055>
            DocumentRoot /Library/WebServer/Documents/
            ScriptAlias /cgi-bin/ /usr/local/php55/cgi
            Action php55-cgi /cgi-bin/php-cgi
            AddHandler php55-cgi .php
            <Directory /Library/WebServer/Documents/>
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow from all
            </Directory>
            DirectoryIndex index.html index.php
        </VirtualHost>

    But when I go to http://127.0.0.1:8055/info.php, I get the following error:

        Forbidden
        You don't have permission to access /cgi-bin/php-cgi/info.php on this server

    Edit: I'm now switching between

        LoadModule php5_module /usr/local/php54/libphp5.so

    and

        LoadModule php5_module /usr/local/php55/libphp5.so

    It works for now, but is not ideal. I would like to have the different versions of PHP on different virtual hosts.
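
    Two details in the config above would produce exactly that 403: the ScriptAlias target lacks a trailing slash, so /cgi-bin/php-cgi resolves into an odd path, and Apache is never granted access to the directory holding the php-cgi binary. A hedged correction, assuming the binary was installed to /usr/local/php55/bin/php-cgi:

        ScriptAlias /cgi-bin/ /usr/local/php55/bin/
        Action php55-cgi /cgi-bin/php-cgi
        AddHandler php55-cgi .php

        # Apache needs explicit permission to execute out of that directory.
        <Directory "/usr/local/php55/bin">
            Options +ExecCGI
            Order Allow,Deny
            Allow from all
        </Directory>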


  • Can I use a Retail DVD media with an OEM key to install Windows Vista?

    - by Sammy
    I got a Fujitsu computer with an OEM license key and Windows Vista. I would like to reinstall Windows on it, but I didn't get any Windows media with it. However, I do possess more than one DVD installation disc from my retail copies of Windows Vista that I use on other computers. Can I use this media instead? Or do I have to order a specialty OEM DVD from the manufacturer or Microsoft? Update: I have found a hidden partition called an "EISA" configuration partition in Disk Management. How can I make use of this? Do I boot from it, or do I mount it to a drive letter and access it inside Windows? Can this be used to restore the computer? It is about 11 GB in size.


  • ArchBeat Link-o-Rama for December 13, 2012

    - by Bob Rhubart
    Key Takeaway Points and Lessons Learned from QCon San Francisco 2012 | Abel Avram Abel Avram's InfoQ article "summarizes the key takeaways from QConSF 2012, including blog entries written by editors and practitioner attendees for all keynotes, tracks and sessions along with aggregated twitter feedback during the event." Pick Bex's Deep Dive Talk for Collaborate 2013 | Bex Huff Bezzotech, Oracle ACE Director Bex Huff's outfit, is presenting a two-hour deep-dive session on ECM at Collaborate 13 in Denver in April. You can help to determine the focus of that session by submitting your ideas directly to Bex. Get the details in his blog post. E2.0 Workbench Podcast 10 – EBS Order Entry with Webcenter via BPEL and SOA Gateway | John Brunswick John Brunswick's latest E2.0 Workbench video tutorial illustrates how to "create a custom service, create a BPEL process that interacts with it and brokers authentication to the SOA Gateway, and finally consume the BPEL service in WebCenter to allow end users to place simple orders via an extranet. Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync." Thought for the Day "Smart data structures and dumb code works a lot better than the other way around." — Eric Raymond Source: SoftwareQuotes.com


  • china and gmail attacks

    - by doug
    "We have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” [source] I don't know much about how internet works, but as long the chines gov has access to the chines internet providers servers, why do they need to hack gmail accounts? I assume that i don't understand how submitting/writing a message(from user to gmail servers) works, in order to be sent later to the other email address. Who can tell me how submitting a message to a web form works?


  • LinkedIn Woopsie with the Outlook 2010 Social Media Connector

    - by Martin Hinshelwood
    I have always used the LinkedIn toolbar for Outlook to sort out, upload and sync my contacts. Because of this I have over 2000 contacts in my contacts list that I sync with my phone, Plaxo, Live, Google and others. I got a surprise the other day when my LinkedIn account was suspended and I was unable to log in. Figure: Bad, account suspended. So I contacted LinkedIn customer services to find out what the problem was, and here is the response:

    "Dear Martin, We have recently noticed a large number of page searches and profile views through your LinkedIn account. We are aware that you may be using an automated or manual process to systematically view LinkedIn web pages. The information within LinkedIn is provided by our users for usage on the site only. In order to protect user privacy, our User Agreement prohibits using: 1. Automated or manual means to view an excessively high number of profiles or mini-profiles. 2. Automated means to run searches to collect or store data obtained from our site. We have placed a restriction on your account until you agree to stop using these or similar methods to view pages on LinkedIn. We look forward to your reply to discuss this further. Sincerely, LinkedIn Privacy Team"

    It looks like LinkedIn has suspended my account because of something that their own component is doing! I do not know if this is an isolated case, or if it will happen more as more users get on Outlook 2010 and update to the new software, but watch out. Has anyone else been suspended who has installed the Office 2010 RTM and the LinkedIn Add-On? Technorati Tags: Fail, LinkedIn, Outlook 2010


  • Automatically connect to VPN when initiating RDP Remote Desktop connection and then disconnect VPN when done

    - by Josh Newman
    I know I can create a batch file to initiate a VPN connection followed by an RDP session, however I want to know if it's possible (in Windows 7 and ideally Windows XP as well) to have the VPN connection tied to the RDP session status. Scenario: user has to VPN first in order to be able to RDP. Ideally user would click one icon (batch file?) to initiate VPN connection and load RDP session. When they close the RDP session I want the VPN to then automatically disconnect so they don't accidentally route their subsequent non-RDP browsing + Internet activity through the VPN.
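
    A minimal batch sketch of that tie-in, assuming a phonebook (VPN) entry named "WorkVPN" with saved credentials and a placeholder host name; start /wait holds the script open until the RDP window closes, after which the VPN is dropped:

        @echo off
        rem Bring the VPN up (the entry must already exist, with credentials saved).
        rasdial "WorkVPN"
        if errorlevel 1 exit /b 1

        rem start /wait blocks until the user closes the Remote Desktop session.
        start /wait mstsc /v:server.internal.example

        rem RDP window closed: drop the VPN so later traffic is not tunneled.
        rasdial "WorkVPN" /disconnect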


  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware's Memory Overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.
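
    On the SQL Server side, the standard counterweight to this behaviour is an explicit ceiling on its memory consumption, so it cannot absorb every hot-added megabyte and never give it back; the 4096 MB figure below is only an example, to be sized per VM:

        -- Cap SQL Server's memory appetite inside the guest.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;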


  • conky stopped displaying after dual-monitor setup -- works when I detect monitors

    - by synaptik
    I just recently installed Ubuntu 12.04 as a clean install. I previously was using 11.10. I am also using a new laptop with a Dell docking station and two external monitors. When I try to use the .conkyrc file that I used previously, my conky display simply doesn't show up anywhere. However, after I went to System Settings > Displays and made some slight change that caused the monitors to refresh, conky appeared as it should. Here is my .conkyrc file:

        background yes
        use_xft yes
        xftfont DejaVu Sans Mono:size=8
        xftalpha 0.8
        out_to_console no
        update_interval 2.0
        total_run_times 0
        draw_shades no
        short_units yes
        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        # If own_window is yes, you may use type normal, desktop or override
        own_window_type override
        # Use pseudo transparency with own_window?
        own_window_transparent yes
        double_buffer yes
        default_color f0e68c
        color1 white
        color2 AD0303
        alignment bottom_left
        gap_x 2
        gap_y 30
        no_buffers yes
        use_spacer right
        pad_percents 3
        xftfont Terminus:size=10
        TEXT
        $stippled_hr
        cpu1: ${color1}${cpu cpu1}% ${color} cpu2: ${color1}${cpu cpu2}% ${color}
        load: ${color1}$loadavg ${color}
        hot proc: ${color1}${top cpu 1}% - ${top name 1}${color}
        $stippled_hr
        big proc: ${color1}${top_mem mem_res 1} - ${top_mem name 1}${color}
        memory: ${color1}$mem/$memmax $memperc%${color}
        $stippled_hr
        disk: ${color1}${fs_used /}/${fs_size /}${color}
        swap: ${color1}${swap}/${swapmax}${color}
        ${diskiograph_read 15,120 color1 0077ff 750} ${diskiograph_write 15,120 color1 0077ff 750}
        $stippled_hr
        download: ${color1}${downspeed wlan0} /s${color}
        ${downspeedgraph eth0 20,120 104E8B 0077ff}
        upload: ${color1}${upspeed wlan0} /s${color}
        ${upspeedgraph eth0 20,120 104E8B 0077ff}

    How can I fix it so that I don't have to tamper with the Displays settings in order for conky to show up?
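
    One hedged thing to try: override-type windows bypass the window manager and are known to misbehave when the screen geometry changes, which is exactly what re-detecting the monitors does. Letting conky own a managed but undecorated window often survives the RandR reconfiguration; these are standard conky options, though whether they cure this particular setup is untested:

        # Instead of: own_window_type override
        own_window_type normal
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager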


  • Google Analytics checkout page tracking problem

    - by Amir E. Habib
    I am running a multilingual website, each language on a different domain name. I am trying to lead all purchase requests to the checkout process, which has its own domain too. In order to keep Google Analytics tracking, I've updated the Google Analytics code accordingly and set the source domain to 'multiple top-level domains'. Everything is going fine so far, except that in the E-commerce Overview the "Sources / Medium" always shows as (direct), or as the name of the source domain. Since I am redirecting using PHP (header('Location: ...')), the Google _link method doesn't seem to be working properly. I want to focus on two questions: 1. Should I create a new profile for the checkout domain in Google Analytics? (I am now using the profile ID of the source domain even though I move to the checkout domain; is that OK?) 2. When I try to pass the cookies of the source domain to the checkout domain, I notice that the Google cookies are copied to the new domain (the cookie path is .checkout-domain/) and they have the same values as the original cookies. But for some reason another set of cookies, with different values (same path), is created once I access a page with Google Analytics code in the checkout pages. It feels like I'm doing something wrong here, so my question is: what am I doing wrong? Does anyone have an idea how to pass the cookies to the checkout domain?
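
    For reference, the 'multiple top-level domains' choice corresponds to a classic ga.js setup along these lines on every page of both domains (the UA number is a placeholder). Cross-domain attribution only survives if the linker data travels with the user, which is why a bare PHP header('Location: ...') redirect, carrying no _link parameters, tends to come out as (direct):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-1']);  // placeholder property ID
        _gaq.push(['_setDomainName', 'none']);      // multiple top-level domains
        _gaq.push(['_setAllowLinker', true]);       // accept linker values from the other domain
        _gaq.push(['_trackPageview']);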


  • How to share internet over VPN and inside a virtual machine (Windows)?

    - by mountrix
    My final goal is to have a virtual machine at work in which anything that happens inside (TCP, UDP, ping, ...) will use the Internet connection of a computer at home. So, if inside this VM I open a browser to a site such as "show my IP", my home IP should be printed. I am also looking for a way to debug/develop a piece of software inside this VM, but I would like to tunnel only the connections of this software, not the full graphical interface, which is why a Remote Desktop solution won't fit. The connection between the two computers should be secured somehow, as in an SSH tunnel. This ultimately should allow me to have a portable VM in which I can connect to whatever networks I have access to at home, in a secure way.

    This is my configuration: at work, I have a LAN-connected desktop computer, with Windows 7 Professional Edition as a host [computer W]. On this same computer, I have a VirtualBox machine running Windows XP [computer V]. At home, I have a laptop running Windows 7 Home Edition [computer H]. This laptop is connected to a Livebox 2 broadband modem by WiFi.

    What I am trying to do is to sit at work in front of the virtual machine [V] and connect to a webpage as if the request were issued from the laptop [H] at home, with the data securely tunneled between the two. But if I am using the Internet directly inside [W], it should use the normal LAN interface at work. To achieve my goal, I first tried VPN, then SSH tunneling, without success.

    I first tried to install TeamViewer between [W] and [H]. This works fine: I can send files, share the desktop, etc. TeamViewer has a VPN mode that creates a new VPN network interface with its own IP, both on computer [W] and [H]. This allowed me to connect [H] as a network computer inside [W], and I was able to share files, but not to share Internet. At this point, I tried to use, from [W], the Internet as if I was at home. I set up a route (using route add from the command line in [W]) to instruct each packet going to a given website to pass by the new VPN interface on [W], with the hope that it would be forwarded to [H], but the webpage was simply inaccessible.

    I then tried to set up a Windows VPN connection between [W] and [H], using the Windows 7 VPN feature, with [H] as the server and [W] the client. But it failed: I got the "Unable to join a remote PC while trying to VPN" 720 error while setting up the client on [W]. I think the problem is the Livebox 2, which could be blocking the packets. But I am not sure of this: 1) with TeamViewer it works fine, 2) the Livebox 2 has a configuration page for port mapping that gives the proper configuration to map VPN ports as an example, so I guess it should allow it, 3) I opened ports 1723 (TCP) and 500 (UDP) according to some forums.

    VirtualBox has a network configuration parameter in which I can use the VPN network interface created by TeamViewer as a bridged connection. This is supposed to work in the sense that all packets issued by the virtual machine [V] should go directly to [H], but I had no Internet connection inside [V]. Using NAT mode, [V] has Internet. For me, this is the feature that I am looking for: filtering all connections from the VirtualBox application to the VPN network interface, while everything else uses the normal LAN interface. Apart from the built-in feature of VBox, I do not even know whether it is possible to route the packets of a given application through a given interface.

    Finally, I also tried SSH tunneling, but this is not the solution I was looking for. Using an external SSH server (Linux), I was able to create a localhost connection on [W] (or [V]), using something like 'ssh -N -D <local port> server[H]', to allow a web browser located on [W] to connect to any website through the SOCKS 5 proxy created locally (SOCKS is a built-in feature of SSH). But repeating the same operation on Windows, using a Windows SSH server inside [W] (I tried freeSSHd), it failed: SFTP worked, but not the SOCKS tunneling; it was as if the browser in [H] could not find the Internet.

    In the end, only TeamViewer seemed able to create a VPN between [W] and [H], but I am not able to use it as I want, that is, using the Internet connection of [H] while sitting in front of [W]. I also tried to bridge the VPN interface and the WiFi interface inside [H], but it froze my laptop. And I tried Internet Connection Sharing, trying to share on [H] the WiFi connection over the VPN interface. This failed too, but it seems that TeamViewer actually uses the WiFi interface to provide the VPN link, so I guess I was creating a recursive loop. I do not know what to try next... Thank you for any advice!!


  • Incremental backups in Quickbooks 2005

    - by Nathan DeWitt
    My church uses QuickBooks 2005. They back up to a 512 MB thumb drive, and have been backing up about every week for the past 18 months. The file size of the backups has grown from 14 MB to about 23 MB. I was planning on giving them a 1 or 2 GB thumb drive and calling it a day, but when I dumped this info into Excel and projected out the growth rate, I found that we'll hit 1 GB in July, 10 GB in about another 18 months, and then 100 GB about 18 months after that. It looks to me like QuickBooks saves all the transactions with every backup. Is there a way to force incremental backups? If this is the way it is, that's fine, but I'd rather not keep buying another order of magnitude of storage space every 18 months. Can I safely delete the previous backups and just keep the most recent 2 or 3 months' worth? Thanks.

