Search Results

Search found 28707 results on 1149 pages for 'writing your own'.

Page 639/1149

  • How to configure Chrome to open magnet URLs with Deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (natty) from 10.10, I can no longer open magnet (torrent) links in Chromium and have Deluge automatically open and accept the URL. (Edit: ".torrent" files are currently not a problem, but magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set Deluge as the default torrent client. (There also doesn't seem to be a "default application" setting for the BitTorrent client to replace Transmission with Deluge.) Notes:
    - I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me?
    - I'm not using Firefox, so manually setting applications for MIME types or extensions doesn't work (that's not an option in Chrome/Chromium, AFAIK -- you have to rely on the OS).
    - I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links.
    - Running from the shell also opens Transmission (not Deluge):
      xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"
    - My current URL handlers are:
      $ gconftool -a /desktop/gnome/url-handlers/magnet
       command = deluge "%s"
       needs_terminal = false
       enabled = true
    The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:
      $ cat /usr/bin/transmission-gtk
      #!/bin/bash
      deluge "$@"
    Anyone else run into this, know of a bug, workaround, or...?
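
    For reference, a minimal sketch of pointing the scheme handler at Deluge through xdg-utils (this assumes Deluge's desktop file is named deluge.desktop; a GConf-based desktop may still consult the gconftool entry above first):
      $ xdg-mime default deluge.desktop x-scheme-handler/magnet
      $ xdg-open "magnet:?xt=urn:btih:..."   # should now launch Deluge rather than Transmission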

    Read the article

  • Choosing the right SSL certificate

    - by seengee
    Hi all, we're looking to purchase some SSL certificates to secure the login pages of e-commerce sites. We don't need to secure the actual payment process, as that is handled by a third party with its own VeriSign certificate. RapidSSL looks like a good (and cheap) option, but a salesperson has told me that these certificates are only suitable for "test sites" and recommended one that is four times the cost. Can anyone make any recommendations about what we should be looking for and what we should consider? Thanks.

    Read the article

  • Adding multiple Exchange accounts under one profile in Outlook 2010 or 2013

    - by Karel Smutný
    I work for two companies, each with its own Exchange Server. I want to configure my Outlook for both email accounts, but I am not able to add both accounts under the same profile in Outlook 2010 or 2013. I found some how-to articles, each involving the Office Configuration Tool; however, my installation does not support this tool. I have Office 2010 Home and Business, and Office 2013 Preview click-to-run. Is there another way? P.S. Having two different profiles, one for each Exchange account, is inconvenient: I cannot run two instances of Outlook at the same time, and switching all the time is tedious.

    Read the article

  • You Don't Want to Meet Orgad Kimchi in a Dark Alley

    - by rickramsey
    Do you remember what those bad guys in the old Charles Bronson films looked like? They looked like Orgad Kimchi, that's what they looked like. When I met him at Oracle OpenWorld 2012, I realized I didn't want to meet him in the wrong alleyway of Budapest after dark. Neither do old versions of Oracle Solaris, which Orgad bends to his will with as much ease as he probably bends stray tourists to his will in Budapest, Kandahar, or Dagestan.
    How Orgad Made Oracle Database Migrate from Oracle Solaris 8 to Oracle Solaris 11
    In this article, which we liked so much we reprinted it from his blog (please don't tell him!), Orgad explains how he head-butted an Oracle Database into submission. The database thought it was safe running in Oracle Solaris 8, but Orgad dragged its whimpering carcass into Oracle Solaris 11. How'd he do that? Well, if you had met Orgad in person, you wouldn't ask that question. Because you'd know he could have simply stared at it, and the database would have migrated on its own. But Orgad didn't do that. Instead, he stuffed an Oracle Solaris 8 Physical-to-Virtual (P2V) Archiver Tool into his leather trench coat, the one with the special pockets sewn in by the East German Secret Police for several Uzis and their ammo, and walked into his data center in a way that reminded the survivors of this clip from Matrix Reloaded. The end result? The Oracle Database 10.2 that was running on Oracle Solaris 8 is now running inside a Solaris 10 branded zone in Oracle Solaris 11. With no complaints. Don't make Orgad angry. Read his article. - Rick

    Read the article

  • How do I get the last value of a column in an Excel spreadsheet?

    - by Chris
    In column A, I have dates. In column B, I have my body weight logged for the day. I add one row to each every day when I weigh myself, so this means the data is sorted by date ascending. The weights, of course, fluctuate (though it would be nice if they would go down every day for my own personal benefit). For a couple of calculations, I want to get the latest (or last) weight entered in column B. Not the max or the min, but the last one entered in the column. I want it to work no matter how many rows I enter. I use Excel 2007, if that ends up mattering.
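
    One common approach (a sketch, assuming column B has no blank cells between entries) is to index into the column by its count of non-empty cells:
      =INDEX(B:B, COUNTA(B:B))
    COUNTA(B:B) returns the number of non-empty cells in the column, so INDEX picks the last filled row; a header cell in B1 is counted too, which still lands on the last entry.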

    Read the article

  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and read the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and with the stupid definitions of Google's dictionary, so I decided to scratch my own itch. I randomly stumbled on a site called Wordnik; it provides examples, definitions, and lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):
      <?xml version="1.0" encoding="utf-8"?>
      <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
        <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
        <os:display>
          <os:name>View on Wordnik</os:name>
          <os:description>Looking up words on an awesome word site called Wordnik</os:description>
          <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
        </os:display>
        <os:activity category="Define">
          <os:activityAction context="selection">
            <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
          </os:activityAction>
        </os:activity>
      </os:openServiceDescription>
    That's it. To get it, go here. Enjoy!

    Read the article

  • SSO between multiple Flex applications

    - by KarthiPk
    We have three applications developed in Flex and all these use BlazeDS. These applications have their own authentication implementations (Database). Also they will be deployed in tomcat. Deploying all these applications in the same tomcat instance is acceptable for us. We want to bring the authentication credentials of all these applications into a single place and also provide SSO feature between these applications. We also want the authentication module to be configurable. Something like the system administrator can decide if the authentication should be done against a database or LDAP. Say, if the user successfully logs into app1, and when he access app2 in the same browser he should be automatically logged in. Same goes for logout as well. We have been exploring OpenAM, jGuard and JOSSO. I'm not sure if these require lot of customization to work with Flex. I would like to know how people are implementing SSO for Flex applications. Is there a common and simple SSO solution available for Flex based applications ?

    Read the article

  • Resolving a real IP address out of a dynamic DNS address

    - by stavnir
    I recently set up a dynamic DNS account (at No-IP, for that matter..) for my own personal needs, especially for SSH-ing into my computer whenever I need to, without knowing its static IP. My questions are: Am I misusing the concept of dynamic DNS? Are there more appropriate methods to do what I want to do? If not, how do I resolve my router's real IP address? Firefox somehow manages to do so; nslookup and other similar commands only resolve the IP of the DDNS server (e.g. no-ip.org). Trying to figure out this mystery with Wireshark failed miserably ;)

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspx
    I was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I've had to settle for watching the sessions online (which ain't bad – they're all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.) The sessions I attended (with my favorites bolded) were:
    Shiny new stuff
    - The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services
    - INTRODUCING: The Future of .NET on the Server
    - DEEP DIVE: The Future of .NET on the Server
    - ASP.NET: Building Web Application Using ASP.NET and Visual Studio
    - The Next Generation of .NET for Building Applications
    - The Future of Visual Basic and C#
    Stuff you can use now
    - Building Rich Apps with AngularJS on ASP.NET
    - Get the Most Out of Your Code Maps
    - SignalR: Building Real-Time Applications with ASP.NET SignalR
    - Performance Optimize Your ASP.NET Web App
    - Modern Web and Visual Studio
    - Visual Studio Power User: Tips and Tricks
    - Debugging Tips and Tricks in Visual Studio 2013
    In a world where the whole company uses TFS…
    - Using Functional, Exploratory and Acceptance Testing to Release with Confidence
    - A Practical View of Release Management for Visual Studio 2013
    - From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum
    Ain't Nobody Got Time for That
    As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I'm still planning to watch…
    - Getting Started with TypeScript
    - Building a Large Scale JavaScript Application in TypeScript
    - Modern Application Lifecycle Management
    - Why a Hacker Can Own Your Web Servers in a Day!
    - Async Best Practices for C# and Visual Basic
    - Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova
    - Applying S.O.L.I.D. Principles in .NET/C#
    - Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin
    - Latest Innovations in Developing ASP.NET MVC Web Applications
    - Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio
    - Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web
    - The Present and Future of .NET in a World of Devices and Services

    Read the article

  • Connecting two wireless ADSL routers to share IPs

    - by user35218
    I have two wireless ADSL routers sitting right next to each other, each with its own internet connection. I'd like to be able to connect to a computer that is connected to router A from a computer that is connected to router B, while keeping each router's internet connection separate. I.e., if computer A is connected to router A, it will use router A's internet connection, and a second computer, call it B, connected to router B will use router B's internet connection. Is this possible?

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming error exceptions; the caller should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions, and document them in the methods that throw them. I would make them checked exceptions in Java.
    Now I have a few questions: Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
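
    For illustration, a minimal Java sketch of the split described above (the class and exception names are invented, not from the post): programming errors surface as unchecked exceptions that the immediate caller should prevent rather than catch, while the operational failure gets its own documented, checked exception type:
      import java.io.IOException;

      // "Failure" exception: checked, documented, meant to be caught by clients.
      class ReportDeliveryException extends Exception {
          ReportDeliveryException(String message, Throwable cause) { super(message, cause); }
      }

      class ReportSender {
          private boolean closed = false;

          /**
           * Sends a report.
           * Preconditions (programming errors if violated): report is non-null, sender not yet closed.
           * @throws ReportDeliveryException if the report cannot be delivered (a failure the caller may handle).
           */
          void send(String report) throws ReportDeliveryException {
              // Programming errors: unchecked exceptions, not meant to be caught by the immediate caller.
              if (report == null) throw new IllegalArgumentException("report must not be null");
              if (closed) throw new IllegalStateException("send() called after close()");

              try {
                  deliver(report);
              } catch (IOException e) {
                  // Operational failure: wrapped in the documented, checked exception.
                  throw new ReportDeliveryException("could not deliver report", e);
              }
          }

          private void deliver(String report) throws IOException { /* ... */ }

          void close() { closed = true; }
      }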

    Read the article

  • Handling extremely large numbers in a language which can't?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (ad infinitum -- integers, no floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations. Rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this:
    - Determine the max length of the integer variable for the language.
    - If a number is more than half the max length of the variable, split it into an array (to give a little play room).
    - Array order: [0] = the digits furthest to the right, [n-max] = the digits furthest to the left. Ex. Num: 29392023 -> Array[0]: 23, Array[1]: 20, Array[2]: 39, Array[3]: 29
    Since I established half the length of the variable as the mark-off point, I can then calculate the ones, tens, hundreds, etc. place via the halfway mark, so that if a variable's max length was 10 digits (0 to 9999999999), then halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of array[0] is in the same place as the first digit (from the right) of array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations for supporting numbers larger than the language can handle natively.
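
    As a worked illustration of the chunking idea above, here is a minimal Java sketch (an assumption-laden toy, not a full bignum library: non-negative integers and addition only). Each int holds a 4-digit chunk in base 10,000, least-significant chunk first, so a plain int never overflows during the carry arithmetic:
      import java.util.Arrays;

      public class ChunkedNumber {
          static final int BASE = 10000;   // 4 decimal digits per chunk

          // Parse "29392023" into {2023, 2939}: right-to-left chunks, as in the sketch above.
          static int[] parse(String s) {
              int n = (s.length() + 3) / 4;
              int[] chunks = new int[n];
              for (int i = 0; i < n; i++) {
                  int end = s.length() - 4 * i;
                  int start = Math.max(0, end - 4);
                  chunks[i] = Integer.parseInt(s.substring(start, end));
              }
              return chunks;
          }

          // Add two chunk arrays, carrying between chunks like pencil-and-paper addition.
          static int[] add(int[] a, int[] b) {
              int[] result = new int[Math.max(a.length, b.length) + 1];
              int carry = 0;
              for (int i = 0; i < result.length; i++) {
                  int sum = carry
                          + (i < a.length ? a[i] : 0)
                          + (i < b.length ? b[i] : 0);
                  result[i] = sum % BASE;
                  carry = sum / BASE;
              }
              return result;
          }

          public static void main(String[] args) {
              // 29392023 + 99999999 = 129392022
              System.out.println(Arrays.toString(add(parse("29392023"), parse("99999999"))));
              // prints [2022, 2939, 1], i.e. 129392022 read from the most-significant chunk down
          }
      }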

    Read the article

  • Is it possible to add files to the "Wordpress Media Library" using the command line?

    - by Tom
    WordPress has its own "Media Library" which is used when you upload images and other media for use in blog posts and pages. The advantage of the media library is that it automatically produces thumbnails of the images, and the web interface gives you extra info such as who uploaded the image, which articles use the image, etc. My question is, does anyone have any tips on interacting with the media library via the command line instead of using the WordPress web interface? For example, any ideas on how to add an image to the media library from the command line? If I copy files to the media library directory (usually .../wp-content/uploads/YYYY/MM/) from the command line they do not show up in the WordPress dashboard - I guess because there needs to be an associated database entry for the media to be registered with WordPress.
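
    For what it's worth, one widely used option is WP-CLI (an assumption that it is installed on the server; it is a separate tool, not part of WordPress core). Its media import command registers the file in the database and generates the thumbnails, just as a dashboard upload would:
      $ cd /path/to/wordpress              # the WordPress install root
      $ wp media import /path/to/photo.jpg --title="Photo title"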

    Read the article

  • CPU load, USB connection vs. NIC

    - by T.J. Crowder
    In general, and understanding the answer may vary by manufacturer and model (and driver, and...), in consumer-grade workstations with integrated NICs, does the NIC rely on the CPU for a lot of help (as is typically the case with a USB controller, for instance), or is it fairly intelligent and capable on its own (like, say, the typical FireWire controller)? Or is the question too general to answer? (If it matters, you can assume Linux.) Background: I'm looking at connecting a device (digital television capture) that will be delivering ~20-50 Mbit/sec of data to a somewhat under-powered workstation. I can get a USB 2 High-Speed device, or a network-attached device, and am interested in avoiding impacting the CPU where possible. Obviously, if it's a 100 Mbit NIC, that's roughly half its theoretical inbound bandwidth, whereas it's only roughly a tenth of the 480 Mbit/second of the USB 2 "High Speed" interface. But if the latter requires a lot of CPU support and the former doesn't...

    Read the article

  • WebAPI and MVC4 and OData

    - by Aligned
    I was looking closer into Web API, specifically how to use OData to avoid writing GetCustomerByCustomerId(int id) methods all over the place. I had problems just returning IQueryable<T> as some sites suggested in the Web API (assembly System.Web.Http.dll, v4.0.0.0). I think things changed in the release version and the blog posts are still out of date. There is no [Queryable] attribute as the answer to this question suggests. Once I got the WebAPI.OData NuGet package and added [Queryable] to the method, http://localhost:57146/api/values/?$filter=Id%20eq%201 worked (don’t forget the ‘$’). Now the main question is whether I should do this, and how to stop logged-in users from sniffing the URL and getting data for other users. John V. Peterson has a post on securing Web API with headers and intercepting the call at that point. He had an update to use HttpMessageHandlers instead. I think I’ll use this to force the call to contain some kind of unique code for the user, but I’m still thinking about this. I will not expose this to the public, just to my calls within my Forms Authentication areas. Other links:
    - http://robbincremers.me/2012/02/16/building-and-consuming-rest-services-with-asp-net-web-api-and-odata-support/ ~ lots of good information
    - John V Peterson example: https://github.com/johnvpetersen/ASPWebAPIExample ~ all data access goes through the Web API and the web client doesn’t have a connection string ~ there is a code library for calling the Web API from MVC using the HttpClient. It’s a great starting point
    - http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx ~ Beta (9/18/2012) NuGet package to help with what I want to do? ~ has a sample code project with examples
    - http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx
    - http://blogs.msdn.com/b/alexj/archive/2012/08/21/web-api-queryable-current-support-and-tentative-roadmap.aspx
    - http://stackoverflow.com/questions/10885868/asp-net-mvc4-rc-web-api-odata-filter-not-working-with-iqueryable
    For JSON, pass the correct format in the header (Accept: application/json); $format=JSON doesn’t appear to be working. Async methods are built into Web API! Look for the GetAsync methods.

    Read the article

  • Calling functions from different classes

    - by A Ron Hubbard Clevenger
    I'm writing a program and I'm supposed to check and see if a certain object is in the list before I call it. I set up the contains() method, which is supposed to use the equals() method of the Comparable interface I implemented on my Golfer class, but it doesn't seem to call it (I put print statements in to check). I can't seem to figure out what's wrong with the code; the ArrayUnsortedList class I'm using to go through the list even uses the correct toString() method I defined in my Golfer class, but for some reason it won't use the equals() method I implemented.
      //From "GolfApp.java"
      public class GolfApp {
          ListInterface<Golfer> golfers = new ArraySortedList<Golfer>(20);
          Golfer golfer;
          //..*snip*..
          if (this.golfers.contains(new Golfer(name, score)))
              System.out.println("The list already contains this golfer");
          else {
              this.golfers.add(this.golfer = new Golfer(name, score));
              System.out.println("This golfer is already on the list");
          }

      //From "ArrayUnsortedList.java"
      protected void find(T target) {
          location = 0;
          found = false;
          while (location < numElements) {
              if (list[location].equals(target)) //Where I think the problem is
              {
                  found = true;
                  return;
              }
              else location++;
          }
      }

      public boolean contains(T element) {
          find(element);
          return found;
      }

      //From "Golfer.java"
      public class Golfer implements Comparable<Golfer> {
          //..irrelevant code snipped..//
          public boolean equals(Golfer golfer) {
              String thisString = score + ":" + name;
              String otherString = golfer.getScore() + ":" + golfer.getName();
              System.out.println("Golfer.equals() has been called");
              return thisString.equalsIgnoreCase(otherString);
          }

          public String toString() {
              return (score + ":" + name);
          }
    My main problem seems to be getting the find function of the ArrayUnsortedList to call my equals function in the find() part of the list, but I'm not exactly sure why; like I said, when I have it printed out it works with the toString() method I implemented perfectly. I'm almost positive the problem has to do with the find() function in the ArraySortedList not calling my equals() method. I tried using some other functions that relied on the find() method and got the same results.
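
    As an aside, a likely culprit (an assumption, since the full ArrayUnsortedList isn't shown): equals(Golfer) overloads rather than overrides Object.equals(Object), so list[location].equals(target), where the element's static type is the generic T, binds to Object.equals and falls back to reference comparison. A sketch of an override in Golfer that the find() loop would actually reach:
      // Sketch: overriding equals(Object) so it is invoked through an Object/T reference.
      @Override
      public boolean equals(Object other) {
          if (!(other instanceof Golfer)) return false;
          Golfer golfer = (Golfer) other;
          String thisString = score + ":" + name;
          String otherString = golfer.getScore() + ":" + golfer.getName();
          return thisString.equalsIgnoreCase(otherString);
      }

      // Keep hashCode consistent with equals whenever equals is overridden.
      @Override
      public int hashCode() {
          return (score + ":" + name).toLowerCase().hashCode();
      }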

    Read the article

  • Development teams do not scale

    - by Matt Watson
    Recently I have been thinking about how development teams don't scale very well. The bigger a team and the product get, the more time the team spends fixing software bugs. This means they spend more time doing troubleshooting and debugging as they grow. The problem is that since developers don't typically have access to production servers, there is a bottleneck in the process when doing production troubleshooting.
    For a team that has 10 developers, I would guess that 0-2 of them have access to production servers. If that team grows to 20 people, it is probably still the same 0-2 people that have production access. This means that those 2 key people are a bottleneck and the team does not scale correctly as you add more resources. All those new developers want to do is help track down and fix software bugs, but they don't have the visibility to do it. So they end up being less productive and frustrated because they really want to fix the problems. The people who do have production access end up spending too much of their time doing troubleshooting instead of working on new projects.
    The solution is to remove the bottlenecks and get those people working on more important tasks. Stackify can solve this problem by giving all the developers read-only access to production servers. This allows them to access the information they need to do troubleshooting on their own.

    Read the article

  • How do I download photos tagged of me from Facebook?

    - by Keith
    I want to be able to download (and back up) photos tagged of me on Facebook. I'm specifically not interested in my own photo albums - I uploaded them and therefore have them already in better quality than FB. What I want are the photos others have uploaded that have me in them. I have a couple of hundred of these now, and don't much fancy three hours of right-click save-as... There seem to be a couple of utilities that pop up with a quick search, but (call me paranoid) I'm wary about giving some random freeware app my login and password. SocialSafe looked promising, but as it doesn't support this feature at the moment it's kinda pointless. Can anyone recommend one that they've actually used? I'd consider an open source one - I'm a programmer and don't mind digging through to check that it doesn't do anything nasty.

    Read the article

  • Having to check collisions twice per game tick

    - by user22241
    I have vertically moving elevators (3 solid tiles wide) and static solid tiles. Each are separate entities and therefore have their own respective collision routines (to check for, and resolve, collisions with the main character). I check my vertical collisions after the character's vertical movements and then horizontal collisions after horizontal movements. The problem is that I want my platform to kill the player if it squashes him from the top, and also if he's on a moving platform (that is moving up) that squashes him into a solid block.
    Correct behaviour: player on solid blocks being squashed from above by a descending elevator. Here is what happens: gravity pushes the character into a solid block, the solid block collision routine corrects the character's position and sits him on the solid block, which pushes him into the moving elevator; the elevator routine then checks for collision and kills the player. This assumes I am checking solid blocks first, then elevator collisions. However, if it's the other way around, this happens....
    Incorrect behaviour: player on an ascending elevator gets pushed into the solid blocks above. The player is on an elevator moving up, gravity pushes him into the elevator, the solid block CD routine detects no collision, no action taken. The elevator CD routine detects the character has been pushed into the elevator by gravity and corrects this by moving the character up and sitting him on the elevator, which pushes him into the solid blocks above; however, the solid block vertical routine has now already run for this tick, so the game continues and the next solid block collision encountered is the horizontal routine. This detects a collision and moves the character out of the collision to the left or right of the block, which looks odd to say the least (the character should get killed here).
    The only way I've managed to get this working correctly is by running the solid block CD, then the elevator CD, then the solid block CD again straight after. This is clearly wasteful but I can't figure out how else to do this. Any help would be appreciated.

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:
      class Widget extends Model {
          public function __construct( $data = null ) {
              $this->name = new FormField('length=20&label=Name:');
              $this->manufactured = new FormDate;
              parent::__construct( $data ); // set above fields using incoming array
          }
      }
    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
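
    For what it's worth, a minimal PHP sketch of the factory-injection alternative mentioned at the end (the FieldFactory interface and its method names are invented here, purely to show the shape): the Widget no longer news up its fields, so a test can hand it a stub factory that returns recording fakes:
      interface FieldFactory {
          public function text($options);   // returns a FormField
          public function date();           // returns a FormDate
      }

      class DefaultFieldFactory implements FieldFactory {
          public function text($options) { return new FormField($options); }
          public function date()         { return new FormDate(); }
      }

      class Widget extends Model {
          public function __construct(FieldFactory $fields, $data = null) {
              $this->name         = $fields->text('length=20&label=Name:');
              $this->manufactured = $fields->date();
              parent::__construct($data); // set above fields using incoming array
          }
      }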

    Read the article

  • Does Exchange 2010 lift the restriction that DL addresses must be in Active Directory?

    - by Justin Grant
    We'd like to enable end-users to be able to create and maintain their own email distribution lists in Exchange 2010, where those lists may include users inside the company but also customers, partners, etc. who are outside the company. One of the limitations in Exchange 2007 (see this question) was that any member of a DL had to have an entry in Active Directory. You couldn't just take a group of email addresses (both inside and outside the company) and create an Exchange DL with those addresses without involving Active Directory admins to create entries for each external user. For a company creating hundreds of small mailing lists every month, this was an unacceptable IT expense. So we had to use a separate mailing list solution (GNU Mailman) for DLs which included external users. Is this limitation relaxed in Exchange 2010 so we can throw away GNU Mailman and use Exchange instead?

    Read the article

  • How can I get vim to set an ACL on its swap files?

    - by thsutton
    I use vim on an OS X Snow Leopard Server machine. A number of the directories I work in have ACLs (so that various groups of users can access them over AFP) that are inherited. For some reason, when I'm working in one of these directories, vim cannot read its own swap files. It can create them fine but can't read them, which, for some reason, makes it display the "swap file already exists" message (and no, the swap file does not already exist). vim -r lists the newly created swap file as "[cannot be read]". The owner and group are correct and the permissions are 0600, and the ACLs on the swap file and the file I'm editing are identical (as disclosed by ls -le and compared with diff). groups returns the same thing whether invoked from my login shell or via :! in vim. Has anyone encountered (and hopefully resolved) a problem like this before?

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. See here: Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two way communication between Systems and my HUD. I know it's recommended (by some people, anyways) to take your HUD out of the ECS. Is that appropriate in my case?

    Read the article

  • Recovering files that do not appear in the Recycle Bin, but are in the $Recycle.bin folder on external drive

    - by Zach Morgan
    Problem: I have an NTFS external drive with a $Recycle.bin folder on the root (E:/$Recycle.bin/) that has about 70 GB worth of data. For whatever reason, the folder is no longer a hidden system file, and no Windows machine I have used the drive on will show the files in the actual Recycle Bin.
    What I Want To Do: I want to at least view the recycle bin files from this external drive; all of the help articles I have read just talk about deleting the folder altogether. I plan on reformatting the drive, but first I need to see if there are any important deleted files.
    What Didn't Work:
    - Recuva - didn't see any of my files
    - Resetting the external's Recycle Bin via command prompt and moving the old $Recycle.bin files into the new external $Recycle.bin folder (I didn't read this anywhere, just made it up on my own)

    Read the article

  • What is Cloud Computing?

    - by joelvarty
    This is a question that we discuss quite often at Edentity. It's one of those things, kind of like "web services", where the terminology has been thrown around by a ton of people and means a lot of different things. Here's my favorite diagram so far, which is a visual breakdown of the material presented here by NIST, visualized by the folks at Cloud Security Alliance. What I like about this diagram is that it shows several different ways that we can differentiate our definitions of cloud computing, starting from the essential characteristics, of which "Broad Network Access" and "On-Demand Self-Service" (which often are used on their own to define cloud computing) are but a couple of things that help make something "cloud". The most important section from my point of view is the middle one – the Service Models. This represents the different ways that cloud computing can be exposed from the ground up. It can be an Infrastructure, a Platform or a piece of Software that an end user interacts with. This is the future, folks. more later - joel

    Read the article
