Search Results

Search found 24037 results on 962 pages for 'every'.

Page 131 of 962

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup / Key Lookup

    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. The increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with it. I have previously written three articles on this [...]

    Read the article

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
    I've recently gotten into XNA and must say I love it. As sort of a hello-world game I decided to create the classic game "Snake". The 90-degree version was very simple and easy to implement, but as I try to make a version of it that allows 360-degree rotation using the left and right arrows, I've run into sort of a problem. What I'm doing now stems from the 90-degree version: iterating through each snake body part beginning at the tail and ending right before the head. This works great when moving every 100 milliseconds. The problem with this is that it makes for a choppy style of gameplay, as technically the game progresses at only 6 fps rather than its potential 60. I would like to move the snake every game loop, but unfortunately, because the snake moves at the rate of its head's size, it goes way too fast. This would mean that the head would need to move at a much smaller increment such as (2, 2) in its direction rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school, I think that I've been thinking too hard on how to accomplish this. It's probably a simple solution, I'm just not catching it. Here's some pseudo code for what I've tried based off of what makes sense to me. I can't really think of another way to do it.

        for (int i = SnakeLength - 1; i > 0; i--)
        {
            current = SnakePart[i];
            next = SnakePart[i - 1];

            current.x = next.x - (current.width * cos(next.angle));
            current.y = next.y - (current.height * sin(next.angle));
            current.angle = next.angle;
        }

        SnakeHead.x += cos(SnakeAngle) * SnakeSpeed;
        SnakeHead.y += sin(SnakeAngle) * SnakeSpeed;

    This produces something like this: Code in Action. As you can see, each part always stays behind the head and doesn't make a "trail" effect. A perfect example of what I'm going for can be found here: Data Worm. Not the viewport rotation, but the trailing effect of the triangles. Thanks for any help!
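    One approach that produces the trailing effect is to record the head's position every update and pin each body segment to the point on that recorded path that lies a fixed arc length behind the head, instead of deriving each segment from the one in front of it. A minimal sketch under those assumptions (the class and field names are hypothetical, assuming XNA's Vector2 and a per-frame Update call; this is not the questioner's code):

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Trail-based snake: the head moves freely every frame and leaves a
        // breadcrumb trail; each body segment sits a fixed distance back along it.
        public class TrailSnake
        {
            private readonly List<Vector2> trail = new List<Vector2>(); // newest point first
            private const float SegmentSpacing = 32f;  // distance between body parts (head size)
            private const float Speed = 2f;            // pixels moved per update

            public Vector2 Head;
            public float Angle;
            public Vector2[] BodyParts;

            public TrailSnake(int bodyPartCount)
            {
                BodyParts = new Vector2[bodyPartCount];
            }

            public void Update(float turnInput) // turnInput: -1 (left), 0, +1 (right)
            {
                Angle += turnInput * 0.05f;

                Head.X += (float)Math.Cos(Angle) * Speed;
                Head.Y += (float)Math.Sin(Angle) * Speed;

                trail.Insert(0, Head); // record where the head has been

                // Walk back along the trail and drop each segment SegmentSpacing apart.
                float distance = 0f;
                int segment = 0;
                for (int i = 1; i < trail.Count && segment < BodyParts.Length; i++)
                {
                    distance += Vector2.Distance(trail[i - 1], trail[i]);
                    if (distance >= SegmentSpacing * (segment + 1))
                    {
                        BodyParts[segment] = trail[i];
                        segment++;
                    }
                }

                // Trim breadcrumbs that are further back than the last segment needs.
                int maxPoints = (int)(SegmentSpacing * (BodyParts.Length + 1) / Speed) + 2;
                if (trail.Count > maxPoints)
                    trail.RemoveRange(maxPoints, trail.Count - maxPoints);
            }
        }

    A List with Insert(0, ...) keeps the sketch short; a ring buffer or LinkedList would avoid the per-frame shift if the trail gets long.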

    Read the article

  • Issues running commands

    - by Joel
    Every time I run a command I get this back:

        E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
        E: Unable to lock directory /var/lib/apt/lists/
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
        christopher@christopher:~$

    This didn't start happening until I changed my device name.

    Read the article

  • YouTube + You

    YouTube is an extremely team-oriented, creative workplace where every single employee has a voice in the choices we make and the features we implement. We work together in...

    Read the article

  • 4th Annual Hartford Code Camp - The Code Camp Manifesto lives on!

    - by SB Chatterjee
    It is amazing that Thom Robbins' blog posting back in December 2004 laid the foundation of the Code Camps that have grown world-wide - there is at least one every weekend in some country (an unscientific sampling of tweet stats). This weekend, we at the Connecticut .NET Developers Group held the 4th Annual Hartford Code Camp, and it was well attended with 120+ attendees and ~30 sessions. Our thanks to the speakers from near and far who made our event a success.

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts with when learning parallel programming. Of course, we will also measure how each optimization affects the performance of the feed aggregator.

    Feed aggregator

    Our feed aggregator works as follows:

    1. Load the list of blogs
    2. Download the RSS feed
    3. Parse the feed XML
    4. Add new posts to the database

    Our feed aggregator is run by a task scheduler, for example every 15 minutes. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism and parallelize feed downloading and parsing. And our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.

    Serial aggregation

    Before doing parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                for (var index = 0; index < blogs.Count; index++)
                {
                    ImportFeed(blogs[index]);
                }
            }

            private void ImportFeed(BlogDto blog)
            {
                if (blog == null)
                    return;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.
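    The post doesn't show the measurement harness itself; a minimal sketch of how the Stopwatch timings could be taken around a FeedClient run (the Program wrapper and console output format are my own assumptions, not part of the original code):

        using System;
        using System.Diagnostics;

        internal static class Program
        {
            private static void Main()
            {
                // Measure one full aggregation run of the FeedClient shown above.
                var stopwatch = Stopwatch.StartNew();

                new FeedClient().Execute();

                stopwatch.Stop();
                Console.WriteLine("Aggregation took {0:0.00} seconds",
                    stopwatch.Elapsed.TotalSeconds);
            }
        }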
    Task parallelism

    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time: feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed imports because the importing tasks don't share any resources and therefore don't need any synchronization.

    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation.

        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                foreach (var item in feed.Channel.Items)
                {
                    SaveRssFeedItem(item, blog.Id, blog.CreatedById);
                }
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                foreach (var item in feed.Entries)
                {
                    SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
                }
            }
        }

    You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize the blog feed aggregation. On my machine this modification gives some performance boost – the time is now 17.57 seconds.

    Data parallelism

    There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking a feed entry for existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                SyndicationContentFormat feedFormat;

                feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                feed.Channel.Items.AsParallel().ForAll(a =>
                {
                    SaveRssFeedItem(a, blog.Id, blog.CreatedById);
                });
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                feed.Entries.AsParallel().ForAll(a =>
                {
                    SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
                });
            }
        }

    We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: the time is now 11.22 seconds.

    Results

    Here are the measurement results (numbers are given in seconds): serial – 25.46, task parallelism – 17.57, task plus data parallelism – 11.22. As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism, our aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely switch to parallel processing, and even then you have to measure the effects of parallel processing to find out whether the parallel code performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 code runs). Parallel programming is something that is hard to ignore: effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support parallel programming.
    Currently you can download Visual Studio Async to get some idea of what is coming.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. In two steps we parallelized the feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.
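    The post mentions that TPL and PLINQ let you cap the degree of parallelism; a minimal sketch of how that could look for the aggregator above. The limit of 4 and the switch to Parallel.ForEach in place of manually started tasks are my own assumptions, not part of the original code (requires using System.Linq and using System.Threading.Tasks):

        // Inside Execute(): cap task parallelism when importing feeds by replacing
        // the manually started Task array with Parallel.ForEach and an explicit limit.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
        Parallel.ForEach(blogs, options, blog => ImportFeed(blog));

        // Inside ImportRssFeed(): cap data parallelism of the PLINQ query the same way.
        feed.Channel.Items
            .AsParallel()
            .WithDegreeOfParallelism(4)
            .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));

    Leaving a core or two free this way is often enough to keep the host system responsive while the aggregator runs.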

    Read the article

  • Summit Old, Summit New, Summit Borrowed...

    - by Rob Farley
    PASS Summit is coming up, and I thought I’d post a few things. Summit Old... At the PASS Summit, you will get the chance to hear presentations by the SQL Server establishment. Just about every big name in the SQL Server world is a regular at the PASS Summit, so you will get to hear and meet people like Kalen Delaney (@sqlqueen) (who just recently got awarded MVP status for the 20th year running), and from all around the world such as the UK’s Chris Webb (@technitrain) or Pinal Dave (@pinaldave) from India. Almost all the household names in SQL Server will be there, including a large contingent from Microsoft. The PASS Summit is by far the best place to meet the legends of SQL Server. And they’re not all old. Some are, but most of them are younger than you might think. ...Summit New... The hottest topics are often about the newest technologies (such as SQL Server 2012). But you will almost certainly learn new stuff about older versions too. But that’s not what I wanted to pick on for this point. There are many new speakers at every PASS Summit, and content that has not been covered in other places. This year, for example, LobsterPot’s Roger Noble (@roger_noble) is giving a presentation for the first time. He’s a regular around the Australian circuit, but this is his first time presenting to a US audience. New Zealand’s Paul White (@sql_kiwi) is attending his first PASS Summit, and will be giving over four hours of incredibly deep stuff that has never been presented anywhere in the US before (I can’t say the world, because he did present similar material in Adelaide earlier in the year). ...Summit Borrowed... No, I’m not talking about plagiarism – the talks you’ll hear are all their own work. But you will get a lot of stuff you’ll be able to take back and apply at work. The PASS Summit sessions are not full of sales-pitches, telling you about how great things could be if only you’d buy some third-party vendor product. It’s simply not that kind of conference, and PASS doesn’t allow that kind of talk to take place. Instead, you’ll be taught techniques, and be able to download scripts and slides to let you perform that magic back at work when you get home. You will definitely find plenty of ideas to borrow at the PASS Summit. ...Summit Blue Yeah – and there’s karaoke. Blue - Jason - SQL Karaoke - YouTube

    Read the article

  • Is it bad to join open-source projects as an amateur?

    - by esqew
    I've thought for about six months now that I should join an open-source iPhone or iPad project to hone my skills in Objective-C, but every time I go to do it I see thousands of lines of code on huge projects that I end up convincing myself I would never understand. I always think that my commits would just end up being a hassle for project admins and more senior contributors, so I always back out at the last second. My question essentially is, is it a hassle when an intermediately-experienced programmer joins an open-source project?

    Read the article

  • how to reinstall/repair ubuntu 12.04 after dual boot installation fails with windows 7

    - by Rini
    I have installed Ubuntu 12.04 on my preinstalled Windows 7 Sony Vaio S series laptop following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ Everything went well and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my BIOS using EasyBCD (but forgot to add the Windows 7 entry). As a result, I lost Windows 7 and couldn't boot into either OS; I then successfully repaired Windows 7 using the recovery CD. Now my problem is that I can't reinstall Ubuntu 12.04 using the Live CD: it halts every time before the disk partitioning step with the error "ubi-partman crashed. ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog. Do you want to try running this step again before continuing? If you do not, your installation may fail entirely or may be broken." Any choice to continue results in the same error, and I am also unable to start GParted. I looked into /var/log/syslog but was not able to understand what the error is. Then I ran sudo fdisk -l to view my partitions and it shows me only one partition. Probably all the partitions I created for Ubuntu 12.04 were lost while running the Windows 7 recovery CD, so I don't know whether Ubuntu is still there or is corrupted. My boot-info URL is: http://paste.ubuntu.com/1202146/ Please tell me how to remove this error so that I can reinstall/repair Ubuntu 12.04. Thanks in advance. R Shukla

    Read the article

  • Steps To Modify Popular Themes On Windows XP

    Personal computer users are familiar with the various exciting themes available with all versions of Microsoft Windows XP.In fact, almost each and every version of this hugely popular operating syste... [Author: Steffen Anderson - Computers and Internet - March 29, 2010]

    Read the article

  • VS for Database Pros (GDR R2) Removes Sproc Comments (2 replies)

    I have been working with my team to implement Data Dude GDR R2 for managing ALL of the databases for our applications. So far I am very pleased by what we can do with the tool with a single exception. I want to have a header with comments as part of every stored procedure so we can track the history of a procedure. When creating a deployment script, and subsequently running it, Data Dude strips ou...

    Read the article

  • Indie Games See The Linux Market

    Blog of Helios: "Sure, I've played all the repository shooters...bloody chunks flying and monsters galore. I have a short attention span...mostly because I suck at shooter games. I just don't play them often. But every now and then, one game catches my eye. For this post, that game is Caster."

    Read the article

  • Always keep files updated in Eclipse

    - by AK01
    I keep lots of files/editors open in Eclipse. I also love using git stash and other git commands that essentially change the contents of my open files. Is there an Eclipse feature or plugin that will always keep the contents of my open files up to date and live? Currently if I put focus in an out of sync editor, I get an awkwardly worded dialog that I have to parse carefully every time. I wish it would just keep me synced like Textmate does.

    Read the article

  • Consistent Flash Player Crash ONLY on YouTube

    - by Aiman Mueller
    It could be similar to one of the bugs listed on LaunchPad (#689158), but may not be. Basically, I used to occasionally get a crash on YouTube and opening a new browser or rebooting (don't remember which) took care of the problem. However, now, EVERY time I try to open a video on YouTube, I get the frowning block and the message, "The Adobe Flash plugin has crashed." However, Hulu would also call for Adobe, right? But I can see videos there.

    Read the article

  • Security in Software

    The term security has many meanings based on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining the confidentiality, integrity, and availability of a system, its sub-systems, and its data. At a very high level this definition can be restated as: computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system.

    Key Aspects of Software Security

    - Integrity
    - Confidentiality
    - Availability

    Integrity within a system is the concept of ensuring that only authorized users can manipulate information, and only through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decides that each sales member may only update their own leads while sales managers can update all leads in the system, then an integrity violation occurs when a sales member attempts to update someone else's lead: the lead was not entered by that sales member, which breaks the business rule that leads can only be updated by the originating sales member.

    Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, the very existence of confidential information and tools would be unknown to all those who do not have access. When this concept is applied within the context of an application, only the authorized information and tools will be available. If we look at the sales lead management system again, leads can only be updated by the originating sales members. Under this rule, each sales lead is confidential between the system and the sales person who entered it; the other sales team members do not need to know about the lead, let alone access it.

    Availability within a system is the concept of authorized users being able to access the system. A real-world example can again be seen in the lead management system. If that system were hosted on a web server, IP restrictions could be put in place to limit access based on the requesting IP address. If, in this example, all of the sales members accessed the system from the 192.168.1.23 IP address, then removing access from all other IPs would ensure that improper access is prevented while approved users can access the system from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system will appear unavailable to them. This is one way of controlling where a system is accessed from.

    Through the years, several design principles have been identified as being beneficial when integrating security aspects into a system. These principles, in various combinations, allow a system to achieve the previously defined aspects of security based on generic architectural models.

    Security Design Principles

    - Least Privilege
    - Fail-Safe Defaults
    - Economy of Mechanism
    - Complete Mediation
    - Open Design
    - Separation of Privilege
    - Least Common Mechanism
    - Psychological Acceptability
    - Defense in Depth

    Least Privilege Design Principle: The Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools. Additionally, access rights should be time-bound, limiting resource access to the time needed to complete the necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated outside the approved context. Assigning access rights this way limits system-damaging actions by users, whether intentional or not, and reduces the number of potential interactions with a resource, which in turn limits damage from accident or error.

    Fail-Safe Defaults Design Principle: The Fail-Safe Defaults design principle pertains to allowing access to resources based on granted access rather than access exclusion. Resources may be accessed only if explicit access has been granted to a user; by default, users have no access to any resource until access has been granted. This approach prevents unauthorized users from gaining access to a resource until access is given.

    Economy of Mechanism Design Principle: The Economy of Mechanism design principle requires that systems be designed as simply and as small as possible, because design and implementation errors can result in unauthorized access to resources that would not be noticed during normal use.

    Complete Mediation Design Principle: The Complete Mediation design principle states that every access to every resource must be validated for authorization.

    Open Design Design Principle: The Open Design design principle is the concept that the security of a system and its algorithms should not depend on the secrecy of its design or implementation.

    Separation of Privilege Design Principle: The Separation of Privilege design principle requires that all approved resource access attempts be granted based on more than a single condition. For example, a user should be validated both for active status and for access to the specific resource.

    Least Common Mechanism Design Principle: The Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared.

    Psychological Acceptability Design Principle: The Psychological Acceptability design principle requires that security mechanisms not make resources more difficult to access than if the security mechanisms were not present.

    Defense in Depth Design Principle: The Defense in Depth design principle is the concept that layering resource access authorization verification in a system reduces the chance of a successful attack. This layered approach requires unauthorized users to circumvent each authorization check to gain access to a resource.

    When designing a system that must meet a security quality attribute, architects need to consider the scope of the security needs and the minimum required security qualities. Not every system will need all of the basic security design principles; most will use one or more in combination based on a company's and architect's threshold for system security, because security adds an additional layer to the overall system and can affect performance. That is why a definition of minimum acceptable security is needed when a system is designed: this quality attribute must be weighed against the other system quality attributes so that the system adheres to all of them according to their priorities.

    Resources:
    Barnum, Sean; Gegick, Michael. (2005). Least Privilege. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/351-BSI.html
    Saltzer, Jerry. (2011). Basic Principles of Information Protection. Retrieved on August 28, 2011 from http://web.mit.edu/Saltzer/www/publications/protection/Basic.html
    Barnum, Sean; Gegick, Michael. (2005). Defense in Depth. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/347-BSI.html
    Bertino, Elisa. (2005). Design Principles for Security. Retrieved on August 28, 2011 from http://homes.cerias.purdue.edu/~bhargav/cs526/security-9.pdf
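    As an illustration of fail-safe defaults, complete mediation, and least privilege in the lead-management example, here is a minimal sketch (the LeadAuthorizer and LeadService names are hypothetical, not from the article): every operation goes through the authorization check first, and the check denies unless an explicit grant exists.

        using System;
        using System.Collections.Generic;

        // Hypothetical lead-management authorization: deny by default (fail-safe
        // defaults), check on every call (complete mediation), and grant only the
        // narrow right that is needed (least privilege).
        public class LeadAuthorizer
        {
            // Explicit grants: (userId, leadOwnerId) pairs a user may update.
            private readonly HashSet<(string UserId, string OwnerId)> _grants =
                new HashSet<(string, string)>();

            private readonly HashSet<string> _managers = new HashSet<string>();

            public void GrantOwnLeads(string userId) => _grants.Add((userId, userId));
            public void GrantManager(string userId) => _managers.Add(userId);

            public bool CanUpdateLead(string userId, string leadOwnerId)
            {
                // No matching rule means no access; nothing is allowed implicitly.
                return _managers.Contains(userId) || _grants.Contains((userId, leadOwnerId));
            }
        }

        public class LeadService
        {
            private readonly LeadAuthorizer _authorizer;

            public LeadService(LeadAuthorizer authorizer) => _authorizer = authorizer;

            public void UpdateLead(string userId, string leadOwnerId, string newNotes)
            {
                // Complete mediation: validate every single update attempt.
                if (!_authorizer.CanUpdateLead(userId, leadOwnerId))
                    throw new UnauthorizedAccessException(
                        "Leads can only be updated by the originating sales member or a manager.");

                // ... persist newNotes for the lead here ...
            }
        }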

    Read the article

  • Renault under threat from industrial espionage, intellectual property the target

    - by Simon Thorpe
    Last year we saw news of both General Motors and Ford losing a significant amount of valuable information to competitors overseas. Within weeks of the turn of 2011 we see the European car manufacturer Renault also suffering. In a recent news report, French Industry Minister Eric Besson warned the country was facing "economic war" and referenced a serious case of espionage which concerns information pertaining to the development of electric cars.

    Renault senior vice president Christian Husson told the AFP news agency that the people concerned were in a "particularly strategic position" in the company. An investigation had uncovered a "body of evidence which shows that the actions of these three colleagues were contrary to the ethics of Renault and knowingly and deliberately placed at risk the company's assets", Mr Husson said. A source told Reuters on Wednesday the company is worried its flagship electric vehicle program, in which Renault with its partner Nissan is investing 4 billion euros ($5.3 billion), might be threatened. This casts a shadow over the estimated losses of Ford ($50 million) and General Motors ($40 million).

    One executive in the corporate intelligence-gathering industry, who spoke on condition of anonymity, said: "It's really difficult to say it's a case of corporate espionage ... It can be carelessness." He cited a hypothetical example of an enthusiastic employee giving away too much information about his job on an online forum. While information has always been passed and leaked, inadvertently or on purpose, the rise of the Internet and social media means corporate spies or careless employees are now more likely to be found out, he added.

    We are seeing more and more examples of companies like these needing to invest in technologies such as Oracle IRM to ensure such important information can be kept under control. It isn't just the recent release of information into the public domain via the Wikileaks website that is of concern, but also the increasing threat of industrial espionage in cases such as these. Information rights management doesn't totally remove the threat, but the ability to control documents no matter where they exist certainly increases your capabilities significantly. Every single time someone opens a sealed document, the IRM system audits the activity. This makes identifying a potential source for a leak much easier when you have an absolute record of every person who has had access to the documents.

    Oracle IRM can also help with accidental or careless loss. Often people use very sensitive information all the time and forget the importance of handling it correctly. With the ability to protect information from screen shots and prevent people from copying and pasting document content into social networks and other, unsecured documents, Oracle IRM brings a totally new level of information security that would have a significant impact on reducing the risk these organizations face of losing their most valuable information.

    Read the article

  • Gnome Do does not autostart and save shortcuts

    - by Matt
    For some reason the autostart of Gnome-Do will not work in 11.10. I've installed Gnome-Do via the Ubuntu Software Center. Then I changed the shortcut to launch Gnome-Do and marked the option to autostart Gnome-Do within Gnome-Do. In order to verify the autostart, I checked whether it's also found in the autostart applications (which it was). However, upon every restart I have to start Gnome-Do manually via the unity launcher and change the shortcut again.

    Read the article

  • Daily tech links for .net and related technologies - Apr 1-3, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 1-3, 2010

    Web Development
    - Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs - ScottGu
    - Using jQuery and OData to Insert a Database Record - Stephen Walter
    - Apple vs. Microsoft – A Website Usability Study
    - Mastering ASP.NET MVC 2.0: Preview - TekPub

    Web Design
    - UX Lessons Learned From Offline Experiences - Jon Phillips
    - 5 Steps Toward jQuery Mastery - Dave Ward
    - 20 jQuery Cheatsheets, Docs and References for Every Occasion - Paul Andrew
    - 11...(read more)

    Read the article

  • Per-Thread Visibility PHPBB

    - by Andrei Krotkov
    I'm trying to implement a registration system for a board I'm running, and I want a forum where every thread is invisible to everyone but the person who started it and the moderator staff. I want the staff to be able to post and for the person registering to respond, but I haven't been able to find a per-post visibility solution. Are there any mods that perform this task, or is there a hidden setting in the software somewhere?

    Read the article

  • How to sync the actions in a multiplayer game?

    - by Wheeler
    I connect the clients with UDP (it's a peer-to-peer connection on a multicast network) and the clients send their positions to each other every frame (on WP7 that means the default 30 FPS). The game is kind of a Pong game, and my problem is this: whenever the opponent hits the ball, the angle will not be the same on both phones. I think it's because of latency (a 1-pixel difference can cause a different angle). So my question is: how can I sync the hitting event?
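    A common way to keep both simulations consistent is to stop inferring the bounce from the streamed positions and instead have the client whose paddle touched the ball send one authoritative "hit" event carrying the ball state at the moment of impact; both peers then continue the simulation from that shared state. A minimal sketch (the message layout and field names are my own assumptions, not from the question):

        using System;
        using System.IO;

        // Hypothetical hit-event message: the hitting client broadcasts this once,
        // and every peer resets its ball to exactly this state.
        public struct BallHitMessage
        {
            public float BallX, BallY;   // ball position at the moment of impact
            public float Angle;          // outgoing angle computed by the hitting client
            public float Speed;          // outgoing speed
            public long TimestampTicks;  // sender time, lets receivers compensate for latency

            public byte[] Serialize()
            {
                using (var stream = new MemoryStream())
                using (var writer = new BinaryWriter(stream))
                {
                    writer.Write(BallX);
                    writer.Write(BallY);
                    writer.Write(Angle);
                    writer.Write(Speed);
                    writer.Write(TimestampTicks);
                    return stream.ToArray();
                }
            }

            public static BallHitMessage Deserialize(byte[] data)
            {
                using (var stream = new MemoryStream(data))
                using (var reader = new BinaryReader(stream))
                {
                    return new BallHitMessage
                    {
                        BallX = reader.ReadSingle(),
                        BallY = reader.ReadSingle(),
                        Angle = reader.ReadSingle(),
                        Speed = reader.ReadSingle(),
                        TimestampTicks = reader.ReadInt64()
                    };
                }
            }
        }

        // Receiving side: adopt the sender's ball state and advance it by the
        // estimated latency so both screens stay in step.
        public static class BallSync
        {
            public static void Apply(BallHitMessage msg, ref float x, ref float y,
                                     ref float angle, ref float speed, TimeSpan estimatedLatency)
            {
                angle = msg.Angle;
                speed = msg.Speed;
                float dt = (float)estimatedLatency.TotalSeconds;
                x = msg.BallX + (float)Math.Cos(angle) * speed * dt;
                y = msg.BallY + (float)Math.Sin(angle) * speed * dt;
            }
        }

    The serialized bytes could travel over the same UDP multicast channel already used for the per-frame position updates.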

    Read the article
