Search Results


  • Last Chance At Space

    - by Grant Fritchey
    All entries for the DBA In Space contest have to be in by this Friday, the 18th. I’m so jealous of all of you who can enter this contest. Just think about it. You’re getting a chance to take a sub-orbital rocket ride. But, here’s the kicker, the chances are limited to data professionals. That’s a pretty small sub-set when you think about it. Further, you have to have gotten the answers to the quiz questions correct, which only takes a little bit of honest research, but come on. That further limits the result set. You’ve really got an excellent shot at this (and the jealousy rears its ugly head again). If you haven’t finished your entry, go on over to the link and get it taken care of. There’s really no reason not to do it. Oh, and by the way, if you’re one of those (I’d say crazy) people who don’t want to ride the rocket, you can take the prize in cash. Although I’d be mighty disappointed in you if you did.

    Read the article

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what fors--is many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that this was a goal means that some of the choices in language design were influenced by that philosophy.

    Please do not post:

    - Syntax flame wars (I couldn't care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et al.] to denote program blocks). That's just an implementation detail.
    - "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for all programming languages to exist--good or bad.

    Please do post:

    - Philosophical ideas that language designers seem to miss.
    - Technical concepts that seem to be poorly implemented more often than not. Please do provide an example of the pain it causes, and any ideas of how you would prefer it to function.
    - Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not.
    - Conceptual features such as built-in test/assertion/contract/error-handling support that you wish all programming languages would implement properly--and define properly.

    My hope is that this will be a fun and stimulating topic.

    Read the article

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws in this? One thing that does seem to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I would absolutely like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?
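
    The handle-to-instance registry problem generalizes across languages. Below is a minimal sketch of the weak-reference fix in Python (the Device class and handle values are hypothetical), where weakref.WeakValueDictionary plays the role that a Dictionary<IntPtr, WeakReference> or ConditionalWeakTable would play in C#: the registry no longer keeps instances alive, so the collector can reclaim them and the dictionary cannot grow without bound.

        import weakref

        class Device:
            # Values are held weakly: when an instance is garbage collected,
            # its entry disappears instead of pinning the object forever.
            _registry = weakref.WeakValueDictionary()  # handle -> Device

            def __init__(self, handle):
                self.handle = handle
                Device._registry[handle] = self

            @classmethod
            def on_native_callback(cls, handle, payload):
                # Static callback from the native layer: recover the instance.
                instance = cls._registry.get(handle)
                if instance is not None:
                    instance.handle_event(payload)

            def handle_event(self, payload):
                print(f"device {self.handle} got {payload}")

    One caveat if you translate this back to .NET: a weakly-referenced wrapper can be collected while the native side still holds the handle, so the handle's own lifetime should be tied to a SafeHandle or an explicit close call rather than to the wrapper's reachability.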

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Simpler Code

    Now that the classes representing an Azure Table entity are defined, let’s review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn’t a primary objective, but the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent 2 frustrating days jumping through hoops and browsing through different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) 2011 FPP licenses. I found jack; or, to be more precise, the closest I found in my country was WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. Question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And the companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying them; you need to have a hawk's eye for that shiny little Microsoft partner logo and spam through a bunch of emails, not knowing if you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license on the online Microsoft Shop. Well whippideegoddamndoo, they sell that, but they don't sell WHS 11 licenses. Why does a company make it so hard to buy their products? Let's not even talk about the licensing itself being a pain.

    Read the article

  • I need some career/education advice regarding computer science [on hold]

    - by user2521987
    So I'm a senior mathematics major this fall, and I have only taken three CS classes (Java I, Java II, and C++). This summer, I am participating in a mathematics REU (Research Experience for Undergraduates), and I program in C++ about 8 hours a day...and I find that I absolutely love it. I love using programming to solve math problems in my research. I think I want to pursue a career in programming. I have a few options:

    1. Stay at my university an extra 1-1.5 years (beyond the 4) and do a double major in Math/CS. This will put me up to around 7-10k in debt (currently I have no debt and am scheduled to graduate debt free). Then apply to a masters in CS.
    2. Apply directly to a masters in CS from a math undergraduate degree. I don't like this idea because I likely won't get into a good program or be funded with so little background.
    3. Go to graduate school, funded, in applied mathematics, and try to further my knowledge of computer science while there. Then apply to a masters in CS.

    I'm not sure if 1 or 3 would be better. My end goal would be to go to a top 20-30 CS graduate program and to get a cool, good job. What would you recommend?

    Read the article

  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project. I'm a little new at this, so I'd appreciate any help. The project is a website that will display a list of photo subject groups (such as "nature", "people", "sports", etc.) on the home page. The photos will all be in subdirectories of the main photo directory (photos), and each subject group will represent a subdirectory of photos. For example, in the photos directory there might be 3 subdirectories, "nature", "people", and "sports", and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to add, delete, or update a subject group, all he has to do is add, delete, or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files in the website and then serve a web page with the information in it, as sketched below. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks
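
    Any language with filesystem access fits the bill (PHP and Python are the usual suspects for this kind of site). As a rough sketch of the idea in Python, assuming a photos/ directory next to the script (the path and page layout are made up for illustration): the script reads the subdirectories of photos/ and emits a home page with one link per subject group, so adding or deleting a subdirectory updates the site automatically.

        from pathlib import Path

        PHOTO_ROOT = Path("photos")  # assumed layout: photos/<subject>/<image files>

        def subject_groups():
            # Each subdirectory of photos/ is one subject group.
            return sorted(p.name for p in PHOTO_ROOT.iterdir() if p.is_dir())

        def home_page_html():
            # Build one list item per subject group.
            items = "\n".join(f'  <li><a href="/{g}">{g}</a></li>'
                              for g in subject_groups())
            return f"<html><body><h1>Galleries</h1>\n<ul>\n{items}\n</ul>\n</body></html>"

        if __name__ == "__main__":
            print(home_page_html())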

    Read the article

  • Simple multi-seat

    - by Oli
    I've asked about multiseat before. The answer (for 10.04) involved doing it the proper way (e.g. through gdm, multiple server layouts). The problem was that gdm needs to be patched or reverted to 2.20 for multiseat. It's an ugly hack that, worse than anything, will hold up future updates. As a result, I didn't do anything. I still have a spare video card. I still have the monitor, keyboard and mouse all sitting waiting to jump into action. And I still want to be able to turn that into a simple desktop. My needs don't seem complicated. I have a second video card, a USB hub and anything connected to that USB hub that I want to be dedicated to another X server. I don't need a login screen (I'm happy hard-coding in an auto-login, and I'd be happy with the user starting the X server if that's possible). This is so simple in my head that I only need two questions answered:

    1. How can I explicitly start an X server from the command line on an unused video adapter (by passing it whatever configuration I need to)?
    2. Can I have this new X session load a desktop environment on load?

    This seems like something you should be able to write in a little upstart script within 10 minutes. That would be perfect for me, as then I'd have nice start/stop control over the secondary desktop from the main desktop (which I want to leave unscathed!). I'm thinking something as simple as this for the payload:

        su other_user -c "startx -- localhost hardware-information"

    And use .xinitrc to load openbox or something...

    Read the article

  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems like I can just create a Facebook App to point at any URL I want and use that as the app/canvas; it accomplishes this via iframes. So, my tentative plan is to just point it towards my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things can and can't be done through a Facebook App. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut from the e-commerce transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps at just any old URL without some sort of a scan; otherwise that would open Facebook up to the horrors of every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms & conditions for what a Facebook App can/can't do? Surprisingly, I can't find this anywhere!! Thanks in advance!

    Read the article

  • Algorithm to match timestamped events from two sources

    - by urza.cc
    I have two different physical devices (one is a camera, one is another device) that observe the same scene and record a timestamp when a specified event occurs. So they each produce a series of timestamps of "when the event was observed". Theoretically the recorded timestamps should be very well aligned: picture the ideal situation as two time lines, "s" and "r", as recorded from the two devices. But more likely they will not be so nicely aligned, and there might be missing events from timeline s or r. I am looking for an algorithm to match events from "s" and "r", so that the result will be something like: (s1,null); (s2,r1); (s3,null); (s4,r2); (s5,r3); (null,r4); (s6,r5), or something similar, maybe with some "confidence" rating. I have some ideas, but I feel this is probably a well-known problem that has some good known solutions; I just don't know the right terminology. I am a little bit out of my element here; this is not my primary area of programming. Any help, suggestions, etc. will be appreciated.
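
    The terminology to search for is sequence alignment. A minimal sketch of a greedy two-pointer version in Python (timestamps in seconds; the tolerance value is made up): it pairs events whose difference falls within the tolerance and emits (event, None) for the rest. For a globally optimal matching plus a natural confidence score, a dynamic-programming alignment in the style of Needleman-Wunsch, with a gap penalty in place of the fixed tolerance, is the standard upgrade.

        def match_events(s, r, tolerance=0.5):
            """Greedily pair two sorted timestamp lists; unmatched -> None."""
            result, i, j = [], 0, 0
            while i < len(s) and j < len(r):
                delta = s[i] - r[j]
                if abs(delta) <= tolerance:
                    result.append((s[i], r[j]))   # close enough: matched pair
                    i += 1
                    j += 1
                elif delta < 0:
                    result.append((s[i], None))   # s event with no counterpart
                    i += 1
                else:
                    result.append((None, r[j]))   # r event with no counterpart
                    j += 1
            result.extend((s[k], None) for k in range(i, len(s)))
            result.extend((None, r[k]) for k in range(j, len(r)))
            return result

        print(match_events([1.0, 2.0, 3.1, 4.0, 5.0, 6.2],
                           [2.05, 4.1, 4.9, 5.6, 6.3]))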

    Read the article

  • Can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay, except we don't actually deal with selling the item; we just list it for sale and provide a way to contact the seller. Anyhow, in several cases sellers may have multiple units of an item for sale. We don't have a quantity field, so they upload each item as a separate listing (and using a quantity field is not an option). So we have a lot of pages which basically have the exact same info, and only the item # might be different. The SEO guy we've started using has said we should put a canonical link on each page, and have the canonical link point to itself. So, for example, www.mysite.com/something/ would have a canonical link of href="www.mysite.com/something/". This doesn't really seem kosher to me; I thought canonical links were supposed to point to other pages. The SEO guy claims doing it this way will tell Google all these pages are indeed unique, even if they do basically have the same content. This seems a little off to me, since what's to stop a spammer from putting up a million pages and doing this as well? Can anyone tell me if the SEO guy's suggestion is valid or not? If it's not valid, do I need to figure out some way to check for duplicated items and automatically pick one of the duplicates to serve as an original and generate canonical links based off that? Thanks in advance for any help

    Read the article

  • Usual Suspects: Typical 3rd Party Entities in E-Commerce [closed]

    - by zharvey
    I am doing some requirements/analysis for a web app that I'd like to build (Ruby/Java developer here). This web app would have a store front and shopping cart and would need to be totally compliant with all e-com best practices. It's amazing how much non-technical info comes up when you search for phrases like "how does e-commerce work", but very little comes up in the way of technical details. As such, I'm having extreme frustration finding answers to what I consider pretty straight-forward questions. I came here because I believe this question is not off-topic; if it is, please leave a comment as to why this question does not belong here and I will happily remove it myself (upvotes if your comment can point me to the correct place for this question!). So then:

    1. What 3rd parties will I need to work with to have a modern, web-compliant e-com site? So far I can account for a payment gateway provider like Authorize.net and an SSL certificate provider like Trustwave. Any others?
    2. What other standards besides PCI compliance will I be held to (besides governing laws, of course!)?
    3. Vulnerability scans: PCI compliance requires quarterly scans. If I'm a "Level 4" (low-volume) merchant, does that still apply to me? Regardless, my backend architecture is quite huge, with web servers, app servers, database, message brokers and more. Does each of these servers need to be scanned?!? If not, which servers do need these quarterly scans?

    I usually hate to ask micro-questions inside of one large one, but these are so closely related I just felt like asking them all separately would be spamming the site with too many petty questions. Thanks in advance!

    Read the article

  • Student Life in Nigeria

    - by FelixWehmeyer
    They say that the Nigerian way of life involves being able and ready to succeed in life no matter the obstacles. My name is Olawale Obasola; I graduated from Covenant University in ’07 and am presently working as a graduate consultant at Oracle. I will give you an insight into student life in Nigeria. Nowadays, being a graduate in Nigeria is not the easiest thing; life after graduation is much harder than one would normally imagine. I guess it isn’t helped by the economic uproar of the last few years, but it’s as grueling as it can get. Also, the upper-class-versus-lower-class phenomenon makes the journey a little too hurried, as it is easier to lose one’s way. Joining Oracle is a dream come true, as it further enhances my stance of taking technology to the core of Nigeria. My initial perception of joining Oracle was that I was going to be monitored every single second and my life would practically revolve around work, but ever since I stepped in here, it has been an amazing experience. As much as the work can be stressful at times, it’s also the best place I have ever worked, and the atmosphere is great. My team is the best Pre-Sales team in the region and I gain invaluable knowledge every minute; we have a bond and understanding that helps propel each other to achieve more. The life of a Nigerian graduate isn’t a bed of roses, but I have a strong will and personality to emerge from any hardship and succeed. We show the true strength and spirit of Africa; we never give up and would never settle for anything less than the best.

    Read the article

  • Problem with a* implementation in pygame

    - by piyush3dxyz
    Yesterday I decided to make an RTS game in pygame (pygame is the best). I have figured out many components of an RTS game, like unit selection, health and resources, but there is one thing I still don't understand: A* pathfinding in pygame. I have done a little bit of research on wikis, articles and papers, but I still can't figure out the problem. Here is the pseudocode for the wiki A* implementation:

        function A*(start, goal)
            closedset := the empty set    // The set of nodes already evaluated.
            openset := {start}            // The set of tentative nodes to be evaluated, initially containing the start node.
            came_from := the empty map    // The map of navigated nodes.
            g_score[start] := 0           // Cost from start along best known path.
            // Estimated total cost from start to goal through y.
            f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)

            while openset is not empty
                current := the node in openset having the lowest f_score[] value
                if current = goal
                    return reconstruct_path(came_from, goal)
                remove current from openset
                add current to closedset
                for each neighbor in neighbor_nodes(current)
                    if neighbor in closedset
                        continue
                    tentative_g_score := g_score[current] + dist_between(current, neighbor)
                    if neighbor not in openset or tentative_g_score <= g_score[neighbor]
                        came_from[neighbor] := current
                        g_score[neighbor] := tentative_g_score
                        f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
                        if neighbor not in openset
                            add neighbor to openset

            return failure
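
    The pseudocode maps almost line for line onto Python; the pygame part is only drawing and input, while the search itself is plain Python. A minimal grid-based sketch (all names illustrative, not pygame API; 4-way movement with a Manhattan heuristic, and a heap standing in for "the node in openset having the lowest f_score"):

        import heapq

        def heuristic(a, b):
            # Manhattan distance suits 4-way grid movement.
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def a_star(grid, start, goal):
            """grid[y][x] == 0 means walkable. Returns a list of (x, y) cells or None."""
            open_heap = [(heuristic(start, goal), 0, start)]  # (f_score, g_score, node)
            came_from, g_score, closed = {}, {start: 0}, set()
            while open_heap:
                _, g, current = heapq.heappop(open_heap)
                if current == goal:                 # reconstruct_path
                    path = [current]
                    while current in came_from:
                        current = came_from[current]
                        path.append(current)
                    return path[::-1]
                if current in closed:
                    continue
                closed.add(current)
                x, y = current
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                        tentative = g + 1           # dist_between is 1 on a uniform grid
                        if tentative < g_score.get((nx, ny), float("inf")):
                            g_score[(nx, ny)] = tentative
                            came_from[(nx, ny)] = current
                            f = tentative + heuristic((nx, ny), goal)
                            heapq.heappush(open_heap, (f, tentative, (nx, ny)))
            return None                             # failure

        grid = [[0, 0, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 0, 0]]
        print(a_star(grid, (0, 0), (0, 2)))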

    Read the article

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5GB), and after 20 years of (mostly) happy times using VS, they have removed Macros, one of the handiest features. The first thing I wanted to do when I upgraded my VS2010 project was to add a #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif and then Ctrl+Shift+R to record a macro, which would be: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V (to paste the new code). Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I used macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of the screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger...a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

    Read the article

  • Trouble with speed and vectors

    - by Eegabooga
    I'm working on adding bullets to my game. Right now I can shoot bullets in the direction that I would like from a ship by getting the ship's angle:

        int speed = 5;
        int dx = -(cos(degreesToRadians(ship.angle)) * speed); // rate of change in the x direction
        int dy = -(sin(degreesToRadians(ship.angle)) * speed); // rate of change in the y direction
        bulletPosition.addX(dx); // addX(dx) is simply bulletPosition.x += dx
        bulletPosition.addY(dy);

    The ship is pretty much the exact same thing, except I use the += operator:

        int dx += -(cos(degreesToRadians(angle)) * 0.15);
        int dy += -(sin(degreesToRadians(angle)) * 0.15);
        shipPosition.addX(dx);
        shipPosition.addY(dy);

    I would like to be able to add the ship's velocity to the bullet's velocity, but I'm a little confused as to how I should get the speed from the ship's vector. I thought that adding the ship's dx to the bullet's dx like

        int dx = -(cos(degreesToRadians(ship.angle)) * speed * dx);

    would work, because I'm adding the rate of change of the ship to the rate of change of the bullet, but that doesn't work. So here's the final question: How can I get the speed of my ship and apply it to my bullet's speed? Thanks in advance for all help :)
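
    Velocities combine by adding their components, not by multiplying them, so the bullet's rate of change should be the ship's (dx, dy) plus a muzzle velocity computed from the ship's angle. A small sketch of the idea in Python with made-up numbers (note the floats: storing dx/dy in ints, as in the snippets above, truncates small per-frame increments like 0.15 straight to zero):

        import math

        def velocity_from_angle(angle_degrees, speed):
            # Same formula the ship uses, kept in floating point.
            rad = math.radians(angle_degrees)
            return (-math.cos(rad) * speed, -math.sin(rad) * speed)

        ship_vx, ship_vy = velocity_from_angle(30, 2.0)        # ship's current velocity
        muzzle_vx, muzzle_vy = velocity_from_angle(30, 5.0)    # bullet relative to the ship

        # The bullet inherits the ship's motion: add component-wise.
        bullet_vx = ship_vx + muzzle_vx
        bullet_vy = ship_vy + muzzle_vy

        print(bullet_vx, bullet_vy)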

    Read the article

  • Is there a command to "manually automount" an attached disk?

    - by cheshirekow
    I have an extra hard drive which I use for backups. The label on its one and only partition is "backup". When I open Nautilus and click on "backup", it mounts the drive at "/media/backup", and then there's a little eject button next to its icon in Nautilus. If I manually mount the drive by creating a directory and using "sudo mount /dev/sdx /some/dir", the eject icon still shows up in Nautilus, but when I press it I get an error, because the device was not mounted via whatever it is that mounts it the other way. What I would like is to be able to do this "mount to /media/backup and enable the eject button" via the command line. The goal is to have the device mounted by a script which needs the drive, but then leave it mounted until I manually eject it... if I want to. P.S. I'm aware that I can have the drive auto-mounted at startup, but that's not what I'm looking for here, and I'd like to know if this is possible. Clarification: I'm looking for a command to "mount the drive the way Nautilus would". This should create the directory "/media/backup", mount the device to that directory, and then, when I press the eject button from Nautilus, it should unmount the device and delete the directory.
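
    A hedged aside for anyone scripting this: the desktop mounts go through the udisks daemon, and asking that same daemon from the command line gets the /media mount point and working eject button for free. A minimal Python sketch (assuming the udisksctl tool from udisks2 is installed; the device path is illustrative):

        import subprocess

        def mount_like_nautilus(device="/dev/sdb1"):
            # udisksctl asks the udisks daemon to mount the device under /media,
            # the same path the file manager uses, so eject/unmount bookkeeping
            # is handled by the daemon rather than by this script.
            subprocess.run(["udisksctl", "mount", "-b", device], check=True)

        if __name__ == "__main__":
            mount_like_nautilus()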

    Read the article

  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync, and a Rebuild is necessary to make the build successful. In practice, you never need to Clean. Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right-click on the desired project name in the Solution Explorer tab and select Set as StartUp project; the project name now appears in bold. Compile just compiles the source file currently being edited. It is useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile. All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to explicitly do those steps every time. This allows for quick "trial and error" debugging. Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.NET ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).

    Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f

    Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en

    Visual Studio 2005 VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en

    Read the article

  • What do YOU want to see in a SharePoint jQuery Session?

    - by Mark Rackley
    Hey party people. So, as you have probably realized by now, I’ve been using quite a bit of jQuery with SharePoint. It’s pretty amazing what you can actually accomplish with a little stubbornness and some guidance from the gurus. Well, it looks like I’ll be putting together a SharePoint jQuery session that I will be presenting at a few conferences. This is such a big and broad topic I could speak on it for hours! So, I need YOUR assistance to help me narrow down what I’ll be focusing on. Some ideas I have are:

    - How to even get started: how to set up SharePoint to work with jQuery
    - What third party libraries exist out there that integrate well with SharePoint
    - How to interact with default SharePoint forms and jQuery (cascading dropdowns, disabling fields, etc.)
    - What is SPServices and how can you use it
    - When should you NOT use jQuery

    What do YOU want to see though? This session is for YOU guys, not for me. Please take a moment to leave a comment below and let me know what you would like to see and learn. Thanks, and I look forward to seeing you in my sessions!! Mark

    Read the article

  • How to decide how backward-compatible my new Mac OS X application should be?

    - by haimg
    I'm currently contemplating writing an OS X version of my Windows software. My Windows application still supports Windows XP, and I know that if I drop support for it now, our customers will cry bloody murder. I'm new to OS X development, and as I learn the technology, APIs, etc., I realize that if I'm going to provide a comparable level of backward compatibility (e.g. down to OS X 10.5), I will not be able to use many things that look very useful and relevant in my case (ARC, XPC communications, many others). This is quite different from Windows, in my opinion, where very little changed between Windows XP and Windows 7 from a desktop application developer's standpoint. So, on one hand, it seems like a complete waste to stick to a 2007- or 2009-level API in 2012. On the other hand, according to the NetMarketShare report and the Stat Owl report, Mac OS X 10.5 and 10.6 market share is still 11% and 35%-40% respectively. However, I'm not sure these older OS users are my target audience (buyers of software utilities), given that they didn't bother to upgrade their OS... My question: Are there any other reasons I should take into account when deciding whether to target 10.5, 10.6 or 10.7 for a new application?

    Read the article

  • Companies and Ships

    - by TechnicalWriting
    I have worked for small, medium, large, and extra large companies, and they have something in common with ships. These metaphors have been used before, I know, but I will have a go at them. The small company is like a speed boat: exciting and fast, and it can turn on a dime, literally. Captain and crew share a lot of the work. A speed boat has a short range and needs to refuel a lot, and it has difficulty getting through bad weather. (Small companies often live quarter to quarter. By the way, if a larger company is living quarter to quarter, it is taking on water.) The medium company is like a battleship. It can maneuver, has a longer range, and the crew is focused on its mission. Its main concerns are the other battleships trying to blow it out of the water, but it can respond quickly. Bad weather can jostle it, but it can get through most storms. The large company is like an aircraft carrier: a floating city. It is well-provisioned and can carry a specialized load for a very long range. Because of its size and complexity, it has to be well-organized to be effective, and most of its functions are specialized (with little to no functional cross-over). There are many divisions and layers between captain and crew. It is not very maneuverable; it has to set its course well in advance and have a plan of action. The extra large company is like a cruise liner. It also has to be well-organized, and changes in direction are often slow. Some of the people are hard at work behind the scenes to run the ship; others can be along for the ride. They sail the same routes over and over again (often happily), with the occasional cosmetic face-lift to the ship and entertainment. It should stay in warm, friendly waters and avoid risky speed through fields of icebergs. I have enjoyed my career on the various Ships of Technical Writing, but I get most of my juice from the battleship, where I am closer to the campaign and my contributions have the greater impact on success. Mark Metcalfe, www.linkedin.com/in/MarkMetcalfe

    Read the article

  • Can't get a good install: 11.10 server

    - by jack
    I apparently screwed up my partitioning trying to get LVM and RAID1 going. The machine is an Intel dual-core DT with 2 GB of RAM and 2 SATA drives, one 250 GB and the other 500 GB. This is a build for my school in NE Thailand. We have 20+ clients now, a website, and email. Our old server is dying fast, and we are going to add another 12 stations next week. I really need some help here!

    1. The onboard gigabit Ethernet apparently uses the same driver as the Realtek 811c. I also installed a PCIe gigabit card, also an 811c. At several points eth0 has accessed the internet fine, but eth1 will not communicate.
    2. I saw a "fix" for this online, which, run as root, is: rmmod r8169. This immediately killed the working onboard card.
    3. I tried to re-install 11.10, figuring that would re-install r8169. However, I messed something up in my partitioning and can't get a clean boot now.
    4. So I think, after 12 or so re-installs and 2 days, I can get through it right if I can start over with clean drives, but I can't figure out how to empty them out, what with the soft RAID and LVM partitions.

    It seems like I've had it going well, and then, trying to fix that one little problem, I go backwards. Please help! Please send email. Thanks.

    Read the article

  • Why does Unity not extend to my 2nd monitor, even when it is displaying an X-Screen?

    - by Gridwalker
    I recently added a 2nd video card to my system, but Unity refuses to extend my desktop over to the second screen. Although the secondary monitor initialises when I boot and I can move the mouse cursor over to the 2nd screen, the screen is otherwise blank (showing no wallpaper or interface elements) and I am unable to move any windows to it. Moving the mouse cursor over to the 2nd monitor changes it from the default cursor to the old-style X cursor, such as the one that appears when you run xkill, indicating that the screen is initialised in the X server but that Unity is not recognising it. Although the NVIDIA X Server Settings application can see both monitors, the Unity system settings application does not detect the 2nd adapter. Sometimes the Additional Drivers application can see both adapters, but it doesn't consistently show options for them both. xrandr also fails to detect the 2nd monitor, but iNex lists both adapters. I have experimented with several different drivers for each adapter and with setting each of the graphics cards as the primary adapter in the BIOS, but this has made little difference. The two adapters are an onboard GeForce 8200 and a PCIe GeForce 7200 GX. The onboard adapter is currently set as the primary; however, this adapter crashes whenever I use the Nouveau driver, and I have to switch over to the PCIe card as primary whenever I purge the proprietary drivers (switching back when the 304 driver has been reinstalled). It doesn't matter which adapter I set as my primary; the results are the same: one screen showing the Unity interface and one screen showing an X screen that only displays the mouse cursor. All I want is to be able to run this system in a dual-screen configuration. I am not a gamer, nor do I require 3D rendering capabilities. Anything you can suggest to get the desktop to extend across both screens will be massively appreciated!

    Read the article

  • Remember way back when we had a free decompiler?

    - by Justin Jones
    I, like probably so many of the rest of you, was mortified when Reflector was sold to RedGate. I knew where it was going. Suddenly you had to install it instead of just downloading and running it. I had a deep-down feeling that one of the most useful tools in my arsenal was about to become a corporate product and no longer belong to the world of free tools. Sure enough, it did. For a while now I've limped by without my favorite decompiler. This was made a little easier by the fact that you can now debug into the .NET Framework, but I still missed Reflector. JetBrains, makers of the super-awesome and well-worth-the-cost ReSharper (no, it's not free), have made their own decompiler that is comparable to Reflector, and it's free. It's still a corporate product, and JetBrains isn't exactly known for making free software, but for now we have an option back on the table until some other industrious developer makes the next Reflector. dotPeek can be downloaded here: http://www.jetbrains.com/decompiler/

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics.) The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management, and their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is not a consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("code smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article
