Search Results

Search found 13068 results on 523 pages for 'copy paste'.


  • 2D Image Creator for a video game

    - by user1276078
    I need to make a few images for an arcade video game I'm writing in Java. Right now I have drawings that animate, but there are two problems: the drawings are horrible, so the game won't get enough attention, and it's a pain to change each coordinate for a drawing, since the drawings are fairly complex. I'd like to use images instead. I feel they could solve my problem: they would look better than the drawings, and each would need only an x and a y coordinate, rather than the many coordinates I need for each drawing. So, in a sense, I have two questions. First, would images actually help? Would they solve my two problems? Second, how would I make these images? I don't think I can copy them off the internet, because I plan on publishing this game. So, is there any software for making your own images? It has to produce an image type that Java can support, since I'm working with Java. Also, as stated by the header, it needs to be a 2D image, not 3D.
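
    On the format question, Java's built-in ImageIO reader loads PNG, JPEG, GIF and BMP, so any editor that exports PNG (the free GIMP, for instance) produces files the game can use; PNG is usually the best fit for 2D sprites because it is lossless and supports transparency. A minimal sketch of loading a sprite once and drawing it at a single (x, y) position; the class layout and file name are illustrative, not from the original post:

        import java.awt.Graphics;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class Sprite {
            private final BufferedImage image;
            private int x, y;

            public Sprite(String path) throws IOException {
                // ImageIO reads PNG, JPEG, GIF and BMP out of the box
                image = ImageIO.read(new File(path));
            }

            public void moveTo(int newX, int newY) { x = newX; y = newY; }

            public void draw(Graphics g) {
                // one x/y pair replaces the many coordinates of a hand-drawn shape
                g.drawImage(image, x, y, null);
            }
        }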

    Read the article

  • In VirtualBox, I can't access the DVD drive to install a guest OS

    - by user211062
    I have installed a fresh copy of Ubuntu Server 12.04 and VirtualBox 4.3. I have set up a VM called "MediaServer" and tried to start it. I then get the following error: Cannot open host device '/dev/sr0' for readonly access. Check the permissions of that device ('/bin/ls -l /dev/sr0'): Most probably you need to be member of the device group. Make sure that you logout/login after changing the group settings of the current user (VERR_ACCESS_DENIED) I have looked all over the Internet and have been unable to find a solution. Using Webmin, I tried changing the group settings so that my user name was in the "vboxusers" group, but that did not work either. I tried various other changes in group settings and none of them worked. Also, I tried rebooting the server after the changes and that didn't work either. I have been following a guide on how to set up an Ubuntu server from the website "linuxhomeserverguide.com" and when it came to the section where you could finally set up your first virtual machine, I am stumped. I would really appreciate it if someone could help me. Thanks in advance.
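
    For what it's worth, the error text is about membership in the group that owns /dev/sr0 on the host; on Ubuntu that group is typically cdrom rather than vboxusers (which governs USB device pass-through). A sketch of the usual fix, with mediaadmin standing in for the actual user name; a full logout/login (or reboot) is required before the new group membership takes effect:

        ls -l /dev/sr0                        # note the owning group, usually 'cdrom'
        sudo usermod -aG cdrom mediaadmin     # add the user to that group
        sudo usermod -aG vboxusers mediaadmin # keep VirtualBox's own group as well
        groups mediaadmin                     # verify both groups are now listed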

    Read the article

  • Attributes of an Ethical Programmer?

    - by ahmed
    Software that we write has ramifications in the real world. If not, it wouldn't be very useful. Thus, it has the potential to sweep across the world faster than a deadly man-made virus, or to affect society every bit as much as genetic manipulation. Maybe we can't see how right now, but in the future our code will have ever-greater potential for harm or good. Of course, there's the issue of hacking. That's clearly a crime. Or is it that clear? Isn't hacking acceptable for our government in the event of national security? What about for other governments? Cases of life-and-death emergency? Tracking down deadbeat parents? Screening the genetic profile of job candidates? Where is the line drawn? Who decides? Do programmers have responsibility for how their code is used? What if a programmer writes code to pry into confidential information or copy-protected material? Does he bear responsibility along with the person who used the program? What about a programmer who knowingly or unknowingly writes code to "fix the books"? Should he be liable?

    Read the article

  • No space left on disk

    - by Ned
    Hi folks. I'm trying to copy/move files to an external 1 TB hard drive with about 50 GB of remaining space, and I receive a "no space left on disk" error when I try. I've moved files off and retried, but still get the same message. Disk Usage Analyzer, Properties, and the freeware TreeSize all report about 50 GB of available hard drive space. I've tried df -k (about 50 GB available) and df -i, with the latter reporting only 1% inode usage. I've also been able to save files from Firefox to the drive. Yet I can't even rename files without getting the message. Yesterday, in the midst of trying to figure this out, I tried to move 4 files to the drive and got the message. Today, I found them on the drive. What's up with that? (That's the only time that has happened, to my knowledge.) Is this an Ubuntu problem? Or is my hard drive just about to fail because of something like a controller problem? Any thoughts would be appreciated.
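
    When the usual tools and the kernel disagree like this, the drive has often been remounted read-only after a filesystem error, which yields misleading messages. A sketch of some checks, assuming the drive is mounted at /media/external as device /dev/sdb1 (both placeholders to be replaced with your actual mount point and device):

        df -h /media/external      # free space as the kernel sees it
        df -i /media/external      # inode usage (only meaningful on ext-family filesystems)
        dmesg | tail -20           # look for I/O or filesystem errors from the drive
        mount | grep sdb1          # a 'ro' flag here means it was remounted read-only
        sudo umount /media/external
        sudo fsck /dev/sdb1        # repair the filesystem while it is unmounted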

    Read the article

  • dvcs - is "clone to branch" a common workflow?

    - by Tesserex
    I was recently discussing dvcs with a coworker, because our office is beginning to consider switching from TFS (we're a MS shop). In the process, I got very confused because he said that although he uses Mercurial, he hadn't heard of a "branch" or "checkout" command, and these terms were unfamiliar to him. After wondering how it was possible that he didn't know about them and explaining how dvcs branches work "in place" on your local files, he was quite confused. He explained that, similar to how TFS works, when he wants to create a "branch" he does it by cloning, so he has an entire copy of his repo. This seemed really strange to me, but the benefit, which I have to concede, is that you can look at or work on two branches simultaneously because the files are separate. In searching this site to see if this has been asked before I saw a comment that many online resources promote this "clone to branch" methodology, to the poster's dismay. Is this actually common in the dvcs community? And what are some of the pros and cons of going this way? I would never do it since I have no need to see multiple branches at once, switching is fast, and I don't need all the clones filling up my disk.
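
    For concreteness, the two workflows look like this in Mercurial terms, since that is what the coworker uses (repository and branch names are placeholders):

        hg clone project project-feature  # "clone to branch": a second full working copy;
        cd project-feature                #  both lines of work can be open side by side

        cd project                        # in-place branching: one working copy
        hg branch feature                 # new work is committed to the 'feature' branch
        hg update default                 # switching rewrites the files in place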

    Read the article

  • Oops! installer misses a lib during OIF 11g install under some conditions

    - by user12674042
    If you installed OIF 11g on OEL 6.2 64-bit and passed all the interesting gotchas, but got stumped by this error in the WLS admin logs, and Enterprise Manager refuses to start correctly after what appeared to be a fully successful install and configuration:

        ...
        <User defined listener oracle.sysman.eml.app.ContextInitializer failed: java.lang.NoClassDefFoundError: HTTPClient/ProtocolNotSuppException.
        java.lang.NoClassDefFoundError: HTTPClient/ProtocolNotSuppException
            at oracle.sysman.eml.app.ContextInitializer.contextInitialized(ContextInitializer.java:1035)
        ...
        Caused By: java.lang.ClassNotFoundException: HTTPClient.ProtocolNotSuppException
            at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:297)
            at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:270)
        ...
        <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application 'em'.
        weblogic.application.ModuleException:
            at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1520)
        ...

    The problem is that the installer fails to properly place a required jar (http_client.jar) in the appropriate location for your WLS domain. Assuming you have Oracle DB installed on the same server, just copy the jar to the lib folder in your domain. For example, if your domain is IDMDomain and the middleware install location is /u01/Middleware, then:

        cp /u01/app/oracle/product/11.2.0/db_1/oui/jlib/http_client.jar \
           /u01/Middleware/user_projects/domains/IDMDomain/lib

    and restart your admin WLS. Enterprise Manager will start to work. Hopefully this will save others some time while on the bleeding edge...

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice on setting up disk quotas. So, I know about:

    - Adding usrquota and grpquota to /etc/fstab for the file systems that need to be managed.
    - Using edquota to assign disk quotas to users.

    However, I need to do the last step for multiple users, and edquota seems to be a bit troublesome. One solution that I have found is that I can do: sudo edquota -u foo -p bar. This will copy the disk quota of bar to user foo. I was wondering if this is the best solution? I tried setting up group disk quotas, but they don't seem to be working. Are group quotas meant to help in assigning the same quota to multiple users, or are they supposed to give a total limit to a set of users? For example, if users A, B, and C are in group X, does assigning a quota of 20 GB give each user 20 GB, or does it give 20 GB to the entire group X to divide up? I'm interested in the former, not the latter. Right now, I've assigned group disk quotas and they aren't working, so I guess it is due to my misunderstanding of group disk quotas... My problem is that I want to easily give the same quota to multiple users; any suggestions on the best way to do this, out of what I've tried above or anything else I may not have thought of? Thank you!
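
    (On the terminology question: a group quota is a shared ceiling, i.e. it limits the combined usage of everyone in the group rather than giving each member the same individual limit.) For handing the same individual quota to many users, the prototype option of edquota can be scripted; a sketch, where bar is the prototype user and the user names in the loop are placeholders:

        for u in alice bob carol; do
            sudo edquota -p bar -u "$u"   # copy bar's limits onto each listed user
        done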

    Read the article

  • SQL Source Control Contest

    - by Ajarn Mark Caldwell
    If you're a regular reader of this blog, you know that I have written several posts about how important I think it is to protect your source code, to version it, and in particular, all the aspects I like about Red Gate's SQL Source Control product. But for a moment, let's take a break from my writing: I want to hear your stories. What nightmare situation are you in, or can you imagine, where source control for your database would save the world? Or maybe your life is not so dramatic, but you do see a challenge that, if you just had a good tool like SQL Source Control, would go much more smoothly. What's your pain? You have read my writings; now tell me your story, and be in the running for a free copy of SQL Source Control from Red Gate. Yes, that's right. Although I am just a fan of Red Gate, they have authorized me to give out a handful of licenses to blog readers who are willing to share their story by posting a comment to this blog entry. Simply add your comment below (be sure to include a valid email address in the box that asks for it) to be entered. The contest starts immediately, and over the next few days the best stories will win.

    Read the article

  • a flexible data structure for geometries

    - by AkiRoss
    What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices and edges), and that have to be "studied" in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles "visible" from a given point in space)? I need to consider multiple aspects of the mesh: geometry, topology and spatial information. The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays of vertices and arrays of indices, but I do not love adding and removing vertices from them. Also, using arrays totally ignores spatial and topological information, which I may need when studying the mesh. So I thought about using custom doubly-linked-list data structures, but I believe doing so will require me to copy the data to array buffers before going to the GPU. I also thought about using a BST, but I am not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask.
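
    One structure that keeps all three aspects available is the half-edge (doubly connected edge list) representation: topology queries stay cheap, while vertex positions live in a flat array that can be copied to the GPU unchanged. A minimal sketch in C++, using indices rather than pointers; this is one reasonable layout among several, not the only answer:

        #include <cstdint>
        #include <vector>

        // Indices (not pointers) keep the structure compact, stable under
        // compaction, and easy to serialize into GPU buffers.
        struct Vertex   { float x, y, z; std::uint32_t halfEdge; }; // one outgoing half-edge
        struct HalfEdge {
            std::uint32_t target; // vertex this half-edge points at
            std::uint32_t twin;   // oppositely oriented partner edge
            std::uint32_t next;   // next half-edge around the same face
            std::uint32_t face;   // face lying to the left
        };
        struct Face { std::uint32_t halfEdge; }; // any one half-edge of the face

        struct Mesh {
            std::vector<Vertex>   vertices;  // flat array: uploadable as a vertex buffer as-is
            std::vector<HalfEdge> halfEdges; // topology: stays on the CPU
            std::vector<Face>     faces;
        };

    A spatial index (BVH, octree or kd-tree) built over the faces then answers the ray-intersection and visibility queries, and can be rebuilt from the flat arrays whenever the mesh changes.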

    Read the article

  • SSD on USB 3.0 doesn't always mount

    - by juergen
    I would like to ask you for some support with this issue. The SSD I use is apparently slightly damaged. At the moment it shows the popular Corsair Series 3 problem: the disk works for some minutes and then just suddenly stops. Two months ago it worked like a charm; it was my boot device in those days. So now I still need to copy some data off the device, which leads me to my question. Usually the SSD works again for some minutes when I unplug and replug its power: it gets detected by the system again and I can continue my backup. The problem on Ubuntu is that it is not recognized again after two or three replugs, and I have to reboot to bring it back. To specify the situation: when I unplug and replug my USB mouse before I reboot, it is not recognized either, so something seems to be wrong with the system as well. My question: how do I fix this rebooting issue? I will post all the logs you need for analysis. The problem doesn't depend on the interface; I have already tried USB 3.0, USB 2.0 and SATA. Thank you! Cheers. My system: Ubuntu 11.10, Gigabyte G41MT-S2P, Intel E6750, Corsair Force Series 2 - F120, Digitus USB 3.0 adapter, USB 3.0 interface card
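
    For the part where nothing is recognized again until a reboot, the USB host controller driver can sometimes be unbound and rebound, which resets the bus without restarting. This is only a sketch of the technique: the PCI address and driver name are placeholders that must be read off your own system (ls /sys/bus/pci/drivers/ lists the candidates; on an Ubuntu 11.10-era kernel the USB 2.0 driver is typically ehci_hcd):

        dmesg | tail -20                   # what did the kernel log at the last unplug?
        lspci | grep -i usb                # find the controller's address, e.g. 00:1d.0
        echo -n "0000:00:1d.0" | sudo tee /sys/bus/pci/drivers/ehci_hcd/unbind
        echo -n "0000:00:1d.0" | sudo tee /sys/bus/pci/drivers/ehci_hcd/bind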

    Read the article

  • How do I fix an ATI driver issue?

    - by Michael
    I have little knowledge about Ubuntu, but I am learning. Recently, my Ubuntu 12.04 required that I update. I noticed that it was updating xorg and other things; after the update I was asked to reboot, and I did. Now when I start that computer, it cannot detect any display and refuses to boot from USB. I have no idea how to recover my computer. It simply starts in what amounts to a terminal and asks for my username and password; after that, I can enter commands. The problem may be that I manually installed drivers by copy-pasting from some instructions that I now can't find, but what it amounts to is that I "built" the drivers. They were working GREAT until this update. I did this because otherwise I was unable to get sound through my HDMI. After I hand-built the drivers, they worked great. After this update, ka-blam! Nothing... Also, I had okayed lots of update sources (though all of them [xorg and such] were the stable versions, not the unstable update sources). Please help, and thank you in advance. (If this is posted incorrectly or misplaced, please let me know what to do.)
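
    Since the machine still reaches a text console and accepts commands, the usual recovery for a hand-built ATI driver broken by an xorg update is to remove it and fall back to the open-source driver. A sketch, assuming the manual build was AMD's fglrx; the first line applies only if AMD's .run installer created its uninstall script:

        sudo sh /usr/share/ati/fglrx-uninstall.sh   # only if this script exists
        sudo apt-get remove --purge fglrx*          # remove packaged fglrx leftovers
        sudo rm /etc/X11/xorg.conf                  # let X autodetect the display again
        sudo dpkg-reconfigure xserver-xorg
        sudo reboot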

    Read the article

  • Oh snap! My RPi was upgraded to 512MB! Woo-hoo!

    - by hinkmond
    I ordered a Raspberry Pi Model B (256MB) over 4 months ago on backorder. When it finally came, I saw it was upgraded to the new half-a-gig model! Woot! But all was not perfect. Gary C. told me the shipped configuration of the new RPi models didn't have the right firmware for 512MB, and I had to upgrade the start.elf in the /boot directory to recognize all of the 512MB RAM. I did a "free" command, and sure enough saw only 240MB. Sadness. But Gary gave me a copy of his start.elf, which worked after some trial and error. For anyone ordering the new RPi Model B w/512MB, here are the steps to get you going with full 512MB RAM:

        sudo apt-get update --fix-missing
        sudo apt-get upgrade --fix-missing
        # NOTE: This step takes at least a couple hours on a fast network
        wget https://raw.github.com/raspberrypi/firmware/164b0fe2b3b56081c7510df93bc1440aebe45f7e/boot/arm496_start.elf
        sudo mv /boot/start.elf /boot/orig-start.elf
        sudo mv arm496_start.elf /boot/start.elf
        sudo reboot

    After the reboot:

        free
                     total       used       free     shared    buffers     cached
        Mem:        497768     210596     287172          0      16892     169624
        -/+ buffers/cache:      24080     473688
        Swap:       102396          0     102396

    So of course this means... (drumroll) there is now 498MB available for the Java Embedded heap!

        java -Xmx400m -version
        java version "1.7.0_06"
        Java(TM) SE Embedded Runtime Environment (build 1.7.0_06-b24, headless)
        Java HotSpot(TM) Embedded Client VM (build 23.2-b09, mixed mode)

    Yeah, baby!
    Hinkmond

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere; I'm rather new to DX. My question concerns conservation of resources, specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any texture data supplied has been made elsewhere, likely in VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • What's the best way to manage reusable classes/libraries separately?

    - by Tom
    When coding, I naturally often come up with classes or sets of classes with high reusability. I'm looking for an easy, straightforward way to work on them separately. I'd like to be able to easily integrate them into any project; it should also be possible to switch to a different version with as few commands as possible. Am I right in assuming that git (or another VCS) is best suited for this? I thought of setting up local repositories for each class/project/library/plugin and then just cloning/pulling them. It would be great if I could reference those projects by name, not by the full path, like git clone someproject. Edit: To clarify, I know what VCSs are about and I do use them. I'm just looking for a comfortable way to store and edit some reusable pieces of code (including unit tests) separately, and to be able to include them (without the unit tests) in other projects, without having to manually copy files. Apache Maven is a good example, but I'm looking for a language-independent solution, optimally command-line-based.
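
    With plain git, one way to approximate "reference by name" is a single directory of bare repositories plus submodules in the consuming projects. A sketch, where ~/repos, someproject, and v1.2 are placeholders:

        git init --bare ~/repos/someproject.git     # one bare repo per reusable library
        git clone ~/repos/someproject.git           # hack on it anywhere, push back when done

        # inside a consuming project:
        git submodule add ~/repos/someproject.git libs/someproject
        git submodule update --init                 # after a fresh clone of the consumer
        (cd libs/someproject && git checkout v1.2)  # pin any tag or commit

    Keeping all the bare repositories under one short path (or using git's url.<base>.insteadOf rewriting) removes most of the full-path typing.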

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders. So I may have a structure like this: Folder 1 Folder 2 Folder 3 Folder 4 I have to decide how to implement Moderation for this entity. I've come up with two options: Option 1 When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity. Option 2 When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating. I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
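
    For what Option 1 looks like in code, the permission check is simply a walk up the parent chain; a sketch against simplified stand-in classes, not the poster's actual Entity Framework model:

        using System.Collections.Generic;

        class User { public int Id; }

        class Folder
        {
            public Folder Parent;                                // null at the root
            public List<Folder> Children = new List<Folder>();
            public List<User> Moderators = new List<User>();
        }

        static class Moderation
        {
            // Option 1: store one moderator row and inherit it by walking up the tree.
            public static bool CanModerate(User user, Folder folder)
            {
                for (Folder f = folder; f != null; f = f.Parent)
                    if (f.Moderators.Contains(user))
                        return true;
                return false;
            }
        }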

    Read the article

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello... I have been given the horrible task of doing this:

        1. Launch the website
        2. Take a screenshot
        3. Fill in the form details, click on Next
        4. Take a screenshot
        ...
        Rinse. Repeat.

    Now, with various combinations, this comes up to 300 screenshots, and I have to do this for 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools which capture the screenshot and store it, such as SnagIt. I need to take a screenshot and copy it into a Word document, then take the next screenshot and add it to the Word document, and so on. I thought I would write a tiny utility to help me do this. Here is the requirement spec that I put up for it: an executable which, once launched, seats itself in the system tray. While it is active, every press of Print Scrn should write the captured contents to a Word document as defined (either a default path or a user-defined one), and the document should be saved periodically. Now, my question is: if I am going to develop this as a C# WinForms application, how do I go about it? I can do a fair bit of C# programming and I am willing to learn, but I am not able to locate references on how to make a background process that runs in the tray and, while running, captures the Print Scrn command. Can you folks point me to the right material? Theoretical references should suffice, but if there are practical references, then nothing like it. Thanks!
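
    For the "sit in the tray and see every Print Scrn press" requirement, a WinForms application can register a system-wide hotkey through the Win32 RegisterHotKey function and react to WM_HOTKEY in WndProc. A minimal sketch, with the screen capture and Word writing left as stubs (Word output would typically go through the Office interop assemblies):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        class TrayCapture : Form
        {
            [DllImport("user32.dll")]
            static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

            [DllImport("user32.dll")]
            static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            const int WM_HOTKEY = 0x0312;   // posted to us whenever the registered key is pressed
            const uint VK_SNAPSHOT = 0x2C;  // virtual-key code for Print Scrn

            public TrayCapture()
            {
                ShowInTaskbar = false;
                WindowState = FormWindowState.Minimized;    // add a NotifyIcon for a real tray icon
                RegisterHotKey(Handle, 1, 0, VK_SNAPSHOT);  // system-wide, no modifier keys
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY)
                {
                    // Print Scrn was pressed anywhere in the system:
                    // grab the screen (or read the image off the clipboard)
                    // and append it to the Word document here.
                }
                base.WndProc(ref m);
            }

            protected override void OnFormClosed(FormClosedEventArgs e)
            {
                UnregisterHotKey(Handle, 1);
                base.OnFormClosed(e);
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new TrayCapture());
            }
        }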

    Read the article

  • Run Grunt task in Visual Studio Release Build with a bat file

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/19/run-grunt-task-in-visual-studio-release-build-with-a.aspx

    1. Add a BeforeBuild target to your csproj file. Edit the XML with a text editor:

        <Target Name="BeforeBuild">
          <Exec Condition="'$(Configuration)' == 'Release'" Command="script-optimize.bat" />
        </Target>

    2. Create the script-optimize.bat:

        REM "%~dp0" maps to the directory where this file exists
        cd %~dp0\..\YourProjectFolder
        call npm uninstall grunt
        call npm uninstall grunt
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800 grunt typescript requirejs
        call grunt typescript requirejs copy less:compile less:mincompile

    This grunt command will compile TypeScript, run the RequireJS optimizer, and compile and minimize LESS.

    3. Make it use the minified code when the Web.config compilation debug attribute is set to false:

        <!-- These CustomCollectFiles actions are used so that the Scripts-Release folder/files are included
             when publishing even though they are not project references -->
        <Target Name="CustomCollectFiles">
          <ItemGroup>
            <_CustomFiles Include="Scripts-Release\**\*" />
          </ItemGroup>
        </Target>

    That should be all you need to get a Grunt task to minify and combine JS (plus other tasks) in a Visual Studio Release build with debug = false. This is a great video of Steve Sanderson talking about SPAs, npm, Knockout, Grunt, Gulp, etc. I highly recommend it.

    Read the article

  • How do I write to an outer TrueCrypt volume when the inner volume protection prevents writing?

    - by con-f-use
    In a nutshell: after some time using the outer volume of a TrueCrypt hidden volume, I cannot write to the outer volume anymore; the protection of the inner volume always kicks in first. How do I fix this?

    Details: I'm using TrueCrypt's two-layered encryption on a USB stick. The outer container carries my semi-sensitive stuff, while the inner hidden volume holds somewhat more valuable information. I use both the inner and the outer volume regularly, and that is part of the problem. TrueCrypt can mount the outer volume for writing while protecting the inner one. Usually the inner volume, when not protected this way (or when mounted read-only), would be indistinguishable from free space; that is of course part of TrueCrypt's plausible-deniability scheme. At the beginning, everything worked as expected: I could copy and delete data on the outer volume as I pleased. Now it seems that I have written and deleted enough data to have filled the outer volume once. Despite the write protection, Ubuntu now tries to write to the continuous "free space" that is really the inner volume. It does that although there is enough other free space on the outer volume; but that free space used to hold data, so it is fragmented, and the filesystem prefers writing to continuous space. The write to the continuous free space of course fails with a write-protection error, as TrueCrypt's inner-volume protection kicks in.

    The question: I know this is expected behaviour, but is there a better way to write to the outer volume, one that does not attempt to write to the hidden "free space" at the end? The whole question could be rephrased more generally as: how do I control where on a partition data is written in Ubuntu?

    Read the article

  • Microsoft Silverlight 4 Data and Services Cookbook – Book Review (sort of)

    - by Jim Duffy
    I just received my copy of the Microsoft Silverlight 4 Data and Services Cookbook, co-authored by fellow Microsoft Regional Director, Gill Cleeren, and at first glance I like what I see. I've always been a fan of the "cookbook" approach to technical books because they are problem/solution oriented. Often developers need solutions to solve specific questions like "how do I send email from within my .NET application" and so on, and yes, that was a blatant plug to my article explaining how to accomplish just that, but I digress. :-) I also enjoy the cookbook approach because you can just start flipping pages and randomly stop somewhere and see what nugget of information is staring up at you from the page. Anyway, what I like about this book is that it focuses on a specific area of Silverlight development, accessing data and services. The book is broken down into the following chapters:

        Chapter 1: Learning the Nuts and Bolts of Silverlight 4
        Chapter 2: An Introduction to Data Binding
        Chapter 3: Advanced Data Binding
        Chapter 4: The Data Grid
        Chapter 5: The DataForm
        Chapter 6: Talking to Services
        Chapter 7: Talking to WCF and ASMX Services
        Chapter 8: Talking to REST and WCF Data Services
        Chapter 9: Talking to WCF RIA Services
        Chapter 10: Converting Your Existing Applications to Use Silverlight

    As you can see, this book is all about working with Silverlight 4 and data. I'm looking forward to taking a closer look at it. Have a day. :-|

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order independent transparency.

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model)
            0,
            RenderTargetUsage.PreserveContents
        );

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below.

        DrawBackground
        DrawDeferred
        DrawForward
        DrawTransparent

    The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "Ping-Ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?

    Read the article

  • 64kb limit on the size of MSMQ Multicast Messages

    - by John Breakwell
    When Windows 2003 came out, Microsoft introduced the ability to broadcast messages to any machines that were listening for them. All you had to do was send out a message on a particular port and IP address, and any client that had set up a Multicast queue with a matching port and IP address would get a copy. Since its introduction, there have been a couple of security vulnerabilities that needed to be removed:

        Microsoft Security Bulletin MS06-052
        Vulnerability in Pragmatic General Multicast (PGM) Could Allow Remote Code Execution (919007)

        Microsoft Security Bulletin MS08-036
        Vulnerabilities in Pragmatic General Multicast (PGM) could allow denial of service (950762)

    The second of these, MS08-036, was resolved through an undocumented change in functionality. Basically, a limit of 64kb was put on the maximum size of a message that could be broadcast using the Multicast method. Obviously this has caused a few problems for any existing MSMQ Multicast applications that expected to be able to send larger messages. A hotfix has been developed to resolve this problem:

        961605 FIX: Multicast messages larger than 64 kilobytes (KB) are not delivered as expected by using Message Queuing 3.0 after security update MS08-036 is installed

    A registry change is required:

        1. Open the registry with Regedit.
        2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RMCAST\Parameters\
        3. Create a DWORD called MaxpacketSize.
        4. Set the value to the desired number of bytes. You can set it to a value between zero and 4MB. If you specify anything above 4MB, it will default to 64K.

    A reboot is needed after adding this value.
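
    For scripting the change across several machines, the same value can be set from an elevated command prompt instead of Regedit; a sketch using 1048576 bytes (1 MB) as an example limit:

        REM Set a 1 MB multicast message limit, then reboot so RMCAST rereads it
        reg add HKLM\SYSTEM\CurrentControlSet\Services\RMCAST\Parameters /v MaxpacketSize /t REG_DWORD /d 1048576 /f
        shutdown /r /t 0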

    Read the article

  • What are some internet trends that you've noticed over the past ~10 years? [closed]

    - by Michael
    I'll give an example of one that I've noticed: the number of web sites that ask for your email address (GOOG ID, YAHOO! ID, etc.) has skyrocketed. I can come up with no legitimate reason for this other than (1) password reset [other ways to do this], or (2) to remind you that you have an account there, based upon the time of your last visit. Why does a web site need to know your email address (Google ID, etc.) if all you want to do is...

        - download a file (no legit reason whatsoever)
        - play a game (no legit reason whatsoever)
        - take an IQ test or search a database (no legit reason whatsoever)
        - watch a video or view a picture (no legit reason whatsoever)
        - read a forum (no legit reason whatsoever)
        - post on a forum (mildly legit reason: password reset)
        - newsletter (only difference between a newsletter and a blog is that you're more likely to forget about the web site than you are to forget about your email address -- the majority of web sites do not send out newsletters, however, so this can't be the justification)
        - post twitter messages or other instant messaging (mildly legit reason: password reset)
        - buy something (mildly legit reasons: password reset + giving you a copy of a receipt that they can't delete, as receipts stored on their server can be deleted)

    On the other hand, I can think of plenty of very shady reasons for asking for this information:

        - so the NSA, CIA, FBI, etc. can very easily track what you do by reading your email or asking GOOG, etc. what sites you used your GOOG ID at
        - to use the password that you provide for your account in order to get into your email account (most people use the same password for all of their accounts), find all of your other accounts in your inbox, and then get into all of those accounts
        - sell your email address to spammers

    These reasons, I believe, are why you are constantly asked to provide your email address. I can come up with no other explanations whatsoever. Question 1: Can anyone think of any legitimate or illegitimate reasons for asking for someone's email address? Question 2: What are some other interesting internet trends of the past ~10 years?

    Read the article

  • All in a Day's Work: Unblocking Multiple Downloaded Files with a Single Command

    - by Sam Abraham
    Files downloaded using Internet Explorer retain the Internet Zone permission level and hence are "Blocked" by default on Windows 7 machines. Honestly, while an added overhead for developers, I really appreciate this feature as it provides a good protection layer for casual web users. My workaround is to simply unblock the downloaded zip file (if the download was a zip file) which, in turn, unblocks the files stored within. Today, however, I was left with a situation where I had to "Open" and "Copy" the content rather than "Save" a zip file. That of course left me with a few dozen files I had to manually unblock. A few minutes of internet search led me to the link below, which worked like a charm:

        1. Download streams.exe from SysInternals - http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx
        2. Go to the command prompt (cmd.exe)
        3. Navigate to where you have streams.exe installed
        4. Use the command line switches: streams.exe -s -d "<folder path>"

    This removed the Internet Zone restrictions from all files under "<folder path>" and its subfolders as well. [Deleted :Zone.Identifier:$DATA]

    References: http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/806f0104-1caa-4a66-b504-7a681d1ccb33/

    Read the article

  • Is it viable to port a C++ application to Java through LLVM?

    - by Javier Mr
    How viable is it to port a C++ application to Java bytecode using LLVM (I guess LLJVM)? The thing is that we currently have a process written in C++, but a new client has made it mandatory to be able to run the program in a multiplatform way, using the Java Virtual Machine with, obviously, no native code (no JNI). The idea is to be able to take the generated jar, copy it to different systems (Linux, Windows, 32-bit, 64-bit) and have it just work. Looking around, it appears possible to compile C++ to LLVM IR code and then compile that code to Java bytecode. There is no need for the generated code to be readable. I have tested a bit with similar things using Emscripten, which takes C++ code and compiles it to JavaScript; the result is valid JS but totally unreadable (it looks like assembler). Has anybody done a port of an application from C++ to Java bytecode using this technique? What problems could we face? Is it a valid approach for production code? Note: I am aware that we currently have some non-standard C++ and closed-source libraries. We are looking to remove the non-standard code and all closed-source libraries and use Free Libre Open Source Software, so let's suppose all code is standard C++ with all code available at compile time. Note: It is not an option to write portable C++ code and compile it for each desired target platform; the compiled program must be multiplatform, thus the use of the JVM. (Right now we are not looking into similar solutions based on Python or another language, but I would also like to hear about them.)
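
    For the front half of that tool chain, compiling C++ to LLVM IR is standard Clang usage (the LLJVM side has its own tooling, which I won't guess at here); a sketch with placeholder file names:

        clang++ -emit-llvm -c main.cpp -o main.bc   # C++ -> LLVM bitcode
        llvm-dis main.bc -o main.ll                 # optional: human-readable IR for inspection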

    Read the article

  • Oracle Loader for Hadoop 1.1.0.0.3

    - by mannamal
    We are pleased to announce availability of Oracle Loader for Hadoop 1.1.0.0.3, containing bug fixes and performance improvements to Oracle Loader for Hadoop. The updated product can be downloaded from here: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/big-data-downloads-1451048.html Note that the Oracle Loader for Hadoop 1.1.0.0.3 kit is a complete kit containing the product and bug fixes. Fixes of the earlier version 1.1 patch releases are also included.

    Upgrading to Oracle Loader for Hadoop 1.1.0.0.3 (from versions 1.1.x):

    On the Oracle Big Data Appliance:

        1. Upload the new oraloader rpm to the first Oracle Big Data Appliance server. For example: /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm
        2. As the root user, use dcli from the first Oracle Big Data Appliance server to copy the new rpm to all nodes. For example:
           # dcli -f /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm -d /tmp/oraloader-replace.rpm
        3. As the root user, use dcli from the first Oracle Big Data Appliance server to replace the old oraloader rpm with the new one. For example:
           # dcli "rpm -e oraloader ; rpm -Uvh /tmp/oraloader-replace.rpm"

    On other hardware:

        1. Unzip oraloader-1.1.0.0.3.x86_64.zip at <location of install>
        2. Update OLH_HOME to point to <location of install>/oraloader-1.1.0.0.3

    Read the article
