Search Results

Search found 28682 results on 1148 pages for 'drop down menu'.


  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files.

    Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot, and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stat calls down from 1.5 seconds to 0.6.

    Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation; it was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit, recorded a couple of traces of my application running, and looked at them in the Windows Performance Analyzer, whereupon I saw that in each case the CPU spike of my app was followed by a CPU spike in MsMpEng.exe.

    What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
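    The stat cache is easy to picture. Here is a minimal sketch, assuming Python 3; cached_stat and is_dir are hypothetical names, and a real build tool would need to invalidate entries for any files it rewrites during the build:

    ```python
    import os
    import stat
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def cached_stat(path):
        """Memoize os.stat so repeated checks of the same path hit the cache."""
        return os.stat(path)

    def is_dir(path):
        # Replaces repeated os.path.isdir calls with one cached stat per path.
        try:
            return stat.S_ISDIR(cached_stat(path).st_mode)
        except OSError:
            return False
    ```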

    Read the article

  • SEO - Index images (lazyload)

    - by Guilherme Nascimento
    Note: my question is not about JavaScript itself. I'm developing a jQuery/MooTools/Prototype plugin that works with the DOM to improve page performance (a better user experience), and it will be distributed to other developers to use in their projects. How lazy loading works: images are only loaded as you scroll down the page (like this example: http://www.appelsiini.net/projects/lazyload/enabled_timeout.html). But it shouldn't require the HTML5 data-src="image.jpg" attribute. Two good examples of sites using lazy loading are youtube.com (suggested videos) and facebook.com (photo galleries). I believe the best alternative would be to use <A href="image.jpg">Content for ALT=""</a> and convert it with JavaScript into <IMG alt="Content for ALT=\"\"" src="image.jpg">. Why would I want to do it this way? Because HTML5 isn't supported by every browser (especially mobile ones), and the data-src="image.jpg" attribute isn't recognized by indexers at all. I need the HTML to be fully accessible to search engines; otherwise the plugin won't be something good for other developers. I thought about helping indexing with <noscript><img src="teste.jpg"></noscript>, but the contents of noscript have a negative effect on indexing. I want a plugin that will not obstruct image indexing in search engines, since it will be used by other developers (and me too). So this is my question: how can I make images in HTML accessible to search engines while still minimizing requests?

    Read the article

  • Programming by dictation?

    - by Andrew M
    i.e. you speak out the code, and someone else across the room types it in. Anyone tried this? Obviously the person taking the dictation would need to be a coder too, so you didn't have to explain everything and go into tedious detail (not 'open bracket, new line...' but more like 'create a new class called myParser that takes three arguments, first one is...'). I thought of it because sometimes I'm too easily distracted at my computer. Surrounded by buttons, instant gratification a click away, the world at my fingertips. To get stuff done, I want to get away, write my code on paper. But that would mean losing access to necessary resources, and necessitate tedious typing-up later on. The solution? Dictate.

    Pros:
    - no chance to check reddit, stackexchange, gmail, etc.
    - code while you pace the room, lie down, play billiards, whatever
    - train your brain to think more abstractly (you have to visualize things if you can't just see the screen)
    - skip the tedious details (closing brackets etc.)
    - the typist gets to shadow a more experienced programmer and learn how they work
    - the typist can provide assistance/suggestions
    - external pressure of a typist expecting instructions, urging you to stay focussed

    Cons:
    - might be too hard
    - might not work any better
    - rather inefficient use of the assisting programmer
    - need to find/pay someone to do this

    Read the article

  • Apache server in Raspberry Pi not visible from outside (public IP)

    - by Kronos
    I have made a fresh install of Arch Linux ARM on a Raspberry Pi and set up a LAMP stack there, all fresh. I also have Arch (x86) on my laptop with Apache installed, and as far as I know two web servers cannot run in the same network segment, so the problem is as follows. On my laptop, with its Apache running, if I go to the public IP of my network everything works and I can see my website. But (after shutting that server down, obviously) if I go to the public IP with only the Apache on the Raspberry Pi running, I cannot see the website hosted there. Accessing it via the local network works fine, and I can see the website. So I can reach the Raspberry Pi's site only locally, while my other web server is reachable both locally and publicly. I have the same conf files on both of them, so what is the difference? I was planning on making the RPi a development server. Thanks in advance

    Read the article

  • OpenGL - What software is needed to learn OpenGL on OS X

    - by sugar
    I strongly believe that my question isn't related to programming itself, which is why I'm asking it here (Super User). I am not asking which links to follow to learn OpenGL; I am asking what software is needed to learn OpenGL. Let me explain more precisely: I want to learn OpenGL for iPhone game development, so tools for creating 3D objects that run on OS X would be preferable. In short, I would like to know the list of applications used by professional iPhone game developers on OS X. Thanks in advance for sharing your knowledge. Please add a comment before down-voting. Sagar.

    Read the article

  • Java web app, with plugin framework and ability to connect to source for updates

    - by lessthancommon
    I've searched all around for some good sources, but I've either been searching for the wrong keywords or I'm just missing something. I'm looking to redevelop a web app I've been using for some time now. Many parts are out of date, and we're constantly throwing in little hacks to attempt to give it new life. So what I'd like to do is re-engineer it from the ground up, built on some sort of plug-in framework. Before I continue: I'm more or less an intermediate Java programmer, and in some ways I'm hoping to use this project as a big learning experience. I've read a lot about OSGi, and it seems to be the most complete framework. Ideally, the end result would be a web app where I can run one instance as the hosting environment, and other instances can connect to it to grab new and updated plug-ins. Eventually I'll want to lock down these plug-ins based on some undecided criteria of who can get them (basically some will simply be updates, others will provide new functionality and should be "purchased" through an external system), but that will probably be handled in a later phase. There should be an administration view for managing bundles in a hot environment (I'm looking to avoid having to restart the server for an update). I know all these things are possible; I'm just trying to find some good resources for reference. All the OSGi tutorials I'm finding seem to be too simplistic. If anyone here can guide me in the right direction on any or all of the items I'm looking for, it would be much appreciated. Also, this is my first post, so I'll take any comments/criticisms about the content of my post. Thanks!
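    For the hot-deploy piece specifically, it may help to see the core pattern stripped of OSGi's machinery. Below is a much-simplified sketch in Python (deliberately not OSGi and not Java, purely illustrative); a real OSGi container layers versioning, dependency resolution, and proper lifecycle management on top of this idea. The plugins directory name is hypothetical:

    ```python
    import importlib.util
    import pathlib

    PLUGIN_DIR = pathlib.Path("plugins")  # hypothetical directory of plugin modules

    def load_plugins():
        """Load (or reload) every .py file in PLUGIN_DIR as a plugin module."""
        plugins = {}
        for path in PLUGIN_DIR.glob("*.py"):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)  # (re)executes the module's code
            plugins[path.stem] = module
        return plugins

    # Calling load_plugins() again picks up new or updated files without
    # restarting the host process -- the "hot deploy" part of the requirement.
    plugins = load_plugins()
    ```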

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    WEBCAST THURSDAY!! 03/22/12

    Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you're still stuck with Documentum? Step away from the ledge – there is hope – let us help you.

    Top 4 Content Imperatives:
    - Lower Costs – Reduce labor, maintenance fees, storage and electrical consumption
    - Raise Productivity – Automation and integration, communication, findability
    - Foster Innovation – Enable collaboration, expertise location
    - Improve Online Engagement – Enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce money we are wasting on things like storing content that is redundant or no longer useful. We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What's more, their wages total 18 percent of the United States GDP. And so we can't afford information systems that don't let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward.

    And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times – to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization's accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations, which require that business users be enabled for dynamic marketing content creation, updates, and targeted content creation and management. Unfortunately, if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:
    - Increase sales and loyalty by optimizing online engagement
    - Create, manage and moderate contextually relevant, targeted and interactive online experiences
    - Optimize customer engagement across web, mobile and social channels
    - Manage large scale multichannel global online presence with integration to enterprise applications
    - Enable business users to control their content and make their own updates
    - Publish content from native files – enable navigation of project documents, procedures, policy information
    - Enable content display and updates from existing web applications – one click to drag-and-drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management. So… if you are still stuck with Documentum and don't know what to do – contact us – not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program, we are offering you a way out: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How's that for a nice bonus? It's time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today, join us for a webcast – your eyes will be opened to all that's possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2GB is too much, which I find ridiculous considering that cheap DRAM is about 15 bucks a DIMM in bulk, and our particular software runs better with more memory. I tried to find out how much Google's servers use, and pinning down a number is hard. The best I could find, in a Google research paper, was that in 2008 their commodity servers were using 2GB and 4GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM used. I suspect at least 16GB is going to be the norm.

    Read the article

  • Free Developer Day - Hands-on Oracle 11g Applications Development

    - by [email protected]
    Spend a day with us learning the key tools, frameworks, techniques, and best practices for building database-backed applications. Gain hands-on experience developing database-backed applications with innovative and performance-enhancing methods. Meet, learn from, and network with Oracle database application development experts and your peers. Get a chance to win a Flip video camera and Oracle prizes, and enjoy post-event benefits such as advanced lab content downloads. Bring your own laptop (Windows, Linux, or Mac with minimum 2GB RAM) and take away scripts, labs, and applications*. Space is limited. "Register Now" for this FREE event. Don't miss your exclusive opportunity to meet with Oracle application development & database experts, win Oracle trainings, and discuss today's most vital application development topics.

    Win two Oracle trainings valued at $2,500 each, offered by SDT Learning Corp:
    - Oracle Application Express: Developing Web Applications (4-day course)
    - Oracle Fusion Middleware 11g: Java Programming Ed 1.1 (5-day course)

    You can also register by calling Jamielle Gandía at 787-999-3187.

    Requirements by track:

    For the .Net track:
    1) A Windows machine with 2 GB memory
    2) Attendees must, in advance of the show, download and install VMware Player: http://www.vmware.com/products/player/
    3) Attendees should test their machine to make sure they can run an executable on an external USB hard drive (some corporate machines are locked down so they cannot do this)

    For the Java track (you will save time if you install these applications in advance):
    1) A Windows machine with 2 GB memory
    2) VirtualBox must be installed on each laptop (What is VirtualBox? Where can I download it?)

    For the APEX track:
    1) A Windows machine with 2 GB memory

    Oracle corporate agenda @ Here. Note: limited to 50 people per track.

    Read the article

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command:

    ```
    brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql
    ```

    I think this is supposed to install PHP, PEAR, and PECL, and it seems to install these just fine. Now when I try to install APC:

    ```
    $ pecl install apc
    downloading APC-3.1.9.tgz ...
    Starting to download APC-3.1.9.tgz (155,540 bytes)
    .................................done: 155,540 bytes

    Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305

    Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305

    Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305
    ```

    How can I fix this?

    Read the article

  • Dumping a Linux console scrollback buffer?

    - by Gerald Combs
    We would like to save the output of a program run on a Linux console which spans many lines. Unfortunately it wasn't logged or run under screen, or any other way that lets us easily capture the output. The best method we've been able to come up with so far is:

    1. Log into the machine via a separate SSH session
    2. In the console session, page to the top of the buffer
    3. Repeat:
       - In the SSH session, run "cat /dev/vcs >> screendump.txt"
       - In the console session, page down one screen
    4. Dump the final screen in the SSH session

    Is there a better way? It seems like if the VC memory were contiguous and you knew where it was, you could use dd to pull the console text directly out of kernel memory and into a file.
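    A small helper can at least automate the SSH-session half of that loop. Here is a rough Python sketch (an assumption-laden illustration: it needs root access to /dev/vcs, and /dev/vcs returns the screen as raw text without newlines, so the captured text may need re-wrapping to the console width). It polls the device and appends every new screenful while you page through the console by hand; stop it with Ctrl-C:

    ```python
    import time

    # Poll /dev/vcs and append each new screenful to screendump.txt.
    # Run this in the SSH session while paging through the console.
    last = b""
    with open("screendump.txt", "ab") as out:
        while True:
            with open("/dev/vcs", "rb") as vcs:
                screen = vcs.read()
            if screen != last:  # the console was paged to a new screen
                out.write(screen + b"\n" + b"-" * 78 + b"\n")
                last = screen
            time.sleep(0.5)
    ```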

    Read the article

  • Running .net application over a network

    - by Marlon
    Hello, I need some advice please. I need to enable a .NET application to run from a network share. The problem is that this will be on clients' network shares, and so the path will not be identical. I've had a quick look at ClickOnce and the publish options in VS2008, but it wants a specific network share location – and I'm assuming this location gets stored somewhere when it does its thing. At the moment the job is being done with an old VB6 application, which gets around all these security issues, but that application is poorly written and almost impossible to maintain, so it really needs to go. Is it possible for the domain controller to be set up to allow this specific .NET application to execute? Any other options would be welcomed, as this little application is very business critical. I ought to say that the client networks are schools, and thus are often quite locked down, as are the client machines, so manually adding exceptions to each client machine is a big no-no. Marlon

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy with looking at the first few rows, but occasionally you need to see more. SQL Developer doesn't show you all records all at once. Instead, it brings the records down in 'chunks,' as needed.

    How It Works: there is a preference that tells SQL Developer how many records to get in a single request, or 'fetch,' of records. The default is 50. So suppose I run a query that returns more than 50 rows: there are more than 50 records in the result set, but we have 50 in the grid to start with. We don't actually know how many records are in the result set; to show the record count, we physically query the database with a row-count type query. All we know is that the query has finished executing and that there are rows available to fetch. As you scroll through the grid, when you get to record 50 and scroll further, we'll fetch 50 more records. Or you can cheat to get to the 'bottom' of the result set: ask SQL Developer to get all the records at once. Once all the records have been fetched, you'll see the message: All rows fetched!

    A word of caution: there's a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We've found the best performance to be in that 50-to-200 record range.
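    The same batched-fetch idea applies when you talk to Oracle from code rather than from the SQL Developer grid. As a rough illustration (this is not SQL Developer itself), here is a sketch using the python-oracledb driver, where the cursor's arraysize plays the role of the fetch-size preference; the connection details and table name are hypothetical:

    ```python
    import oracledb  # the python-oracledb driver

    # Hypothetical connection details, for illustration only.
    conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/xepdb1")
    cur = conn.cursor()

    # arraysize is the code-level analogue of SQL Developer's fetch-size
    # preference: the driver pulls rows from the server this many at a time.
    cur.arraysize = 50

    cur.execute("select * from employees")  # 'employees' is a placeholder table
    while True:
        batch = cur.fetchmany()  # one 'fetch' of up to cur.arraysize rows
        if not batch:
            break  # all rows fetched
        for row in batch:
            print(row)
    ```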

    Read the article

  • Slick2D + LWJGL collision system

    - by Connor W
    So I've been learning Java for a while and have explored Slick and LWJGL before, but went away from using Slick for a while. I've recently gone back to it (as I'm making a platformer, and Tiled will be really helpful). But here's where my problems begin: collision. I have a player polygon, and I check whether it's colliding with my tiled map with this method:

    ```java
    public static boolean playerCollisionWith() {
        for (int i = 0; i < Blockmap.entities.size(); i++) {
            Block entity1 = (Block) Blockmap.entities.get(i);
            if (playerPoly.intersects(entity1.poly)) {
                return true;
            }
        }
        return false;
    }
    ```

    This would work normally, but I'm using a different method for movement. Instead of just adding a speed variable to the player's x axis, I move like this:

    ```java
    if (Keyboard.isKeyDown(Keyboard.KEY_RIGHT)) {
        speedX = Math.min(5, speedX + 1);
        moving = true;
        playerPoly.setX(x);
        if (playerCollisionWith()) {
            speedX = -5;
            playerPoly.setX(x);
        }
    }
    ```

    That Math.min call is what is messing me up. I can't just call speedX = -5, because when I do, the player "bounces" when the right key is held down and it's colliding – flashing back and forth really quickly. I also don't really know how I would make collisions on the y axis work, whether the player is jumping or not. If I could get some help with how to fix this problem, that would be great. Thank you for the help!

    Read the article

  • Proper management of PGPool II

    - by Cathy
    Currently I have a site, with one Postgres database server. It is just for a select number of users (less than ten) but it needs the maximum uptime possible. I would like some kind of automatic failover for the database. So I was thinking something like: one server running PGPool II, one running Postgres as master, one running Postgres as slave. But then, if wherever PGPool is running suddenly loses power (or dies, or whatever), there's a single point of failure and the whole thing goes down. Is there a solution, assuming that outsourcing this to someone else isn't possible?

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking… we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse – one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, then when I want to reign that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

    ```csharp
    public static class ThreadExtensions
    {
        public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
        {
            bool isJoined = false;

            if (thread != null)
            {
                isJoined = thread.Join(timeToWait);

                if (!isJoined)
                {
                    thread.Abort();
                }
            }
            return isJoined;
        }
    }
    ```

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

    ```csharp
    namespace Shared.Entities
    {
        public class Account
        {
            public int Id { get; set; }

            public string Name { get; set; }

            public Address HomeAddress { get; set; }

            public int Age { get; set; }

            public DateTime LastUsed { get; set; }

            // etc, etc, etc...
        }
    }

    ...

    namespace Shared.DataAccess
    {
        public class AccountDao
        {
            public Account FindAccount(int id)
            {
                // dao logic to query and return account
            }

            ...
        }
    }
    ```

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which in turn is largely dependent on the business definition of an account, which can be very inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition may change, you'd need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper in the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider plus a development-shop mandate to perform certain complex items in a certain way. Or, perhaps, to simplify and dumb-down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but also very volatile.
    - Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Convert Chrome Bookmark Toolbar Folders to Icons

    - by Asian Angel
    So you have your regular bookmarks reduced to icons, but what about the folders? With our little hack and a few minutes of your time you can turn those folders into icons too.

    Condensing the Folders: reducing bookmark folders to icons is a little more tricky than regular bookmarks, but not hard to do. Right click on the folder and select "Rename…". The folder's name should already be highlighted/selected. Delete the text… notice that the "OK Button" has become unusable for the moment. Now what you will need to do is:

    1. Hold down the "Alt Key"
    2. Type in "0160" (without the quotes) using the numbers keypad on the right side of your keyboard
    3. Release the "Alt Key" after you have finished typing in the number above

    Once you have released the "Alt Key" you will notice two things… the "cursor" has moved further into the text area, and you can now click on the "OK Button" again. There is our folder after editing, and it works just as well as before but without taking up so much room. Here is how our "iconized" folder looks next to our bookmarks. Perfect!

    What if you want to reduce multiple folders to icons? Perform the same exact steps shown above for each folder and pack your "Bookmarks Toolbar" full of folder goodness! The folders will have a little more space between them in comparison with singular bookmarks due to the "blank name" for each folder. For those who may be curious, this is what your bookmarks will look like in the "Bookmark Manager Page". Note: if you export your bookmarks, all bookmarks contained in multiple blank-name folders will be combined into a single folder.

    Conclusion: with just a little bit of work you can pack a lot of goodness into your "Bookmarks Toolbar". No more wasted space…

    Read the article

  • How to convert PowerPoint presentations into a Kindle/E-reader friendly form?

    - by Shiki
    I have a lot of documents in .ppt and .pptx (blame the co-workers). I would like to read them on the way home or elsewhere… when I have a little time to catch up with things. One thing I could do is cut the documents together into one file, but saving that as PDF (even the smaller version of PDF, according to Office 2010) results in a huge file, and PDF is hardly readable on a Kindle. I would need something like .epub: a free, easy-on-the-device format. Is there such a thing? (Manually, I could copy all the images and the native text down, create new presentations, save those, and convert them. But that would just take a lot of time.)

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and to numeric data taking significantly more space as text than it would in native datatypes. A small level could easily end up as 1MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this; I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects and give them a default value if an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format – to remove the massive duplication of tag/attribute names, and maybe also to give attributes native types, so that, for example, floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem – it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for game use, with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, just compressing my raw .xml data (it should pack well with zip-like compression), and taking the memory/performance hit on load?
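    As a sketch of the kind of home-grown format being weighed here, the following minimal Python encoder (hypothetical, and not any of the standard binary-XML formats) illustrates both ideas at once: each distinct tag/attribute name is written once in a string table, and attribute values are packed as 4-byte native floats instead of text. A real format would also need typed values, a decoder, and versioning:

    ```python
    import struct

    def encode_node(node, names, out):
        """Append one node to `out`; `names` maps tag/attr names to table indices."""
        def name_id(name):
            if name not in names:
                names[name] = len(names)
            return names[name]

        tag, attrs, children = node  # node = (tag, {attr: float}, [children])
        out += struct.pack("<HH", name_id(tag), len(attrs))
        for key, value in attrs.items():
            # Each attribute: 2-byte name index + 4-byte float, vs. text in XML.
            out += struct.pack("<Hf", name_id(key), value)
        out += struct.pack("<H", len(children))
        for child in children:
            encode_node(child, names, out)

    # A toy level tree with made-up tag and attribute names.
    level = ("level", {"gravity": 9.8}, [("crate", {"x": 12.0, "y": 3.5}, [])])

    names, body = {}, bytearray()
    encode_node(level, names, body)

    # Prepend the name table so a decoder can map indices back to strings.
    table = bytearray(struct.pack("<H", len(names)))
    for name in names:  # dicts preserve insertion order in Python 3.7+
        encoded = name.encode("utf-8")
        table += struct.pack("<H", len(encoded)) + encoded
    blob = bytes(table + body)
    ```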

    Read the article

  • How can I move a load of zone records from a web-based system to a text-based one?

    - by Chris Adams
    Hi there, I have a few domains with Dreamhost where I have set up a load of records using their web-based domain name system, and I'd like to move them to another provider (Gandi.net, if you're interested) that lets me enter the info directly as a text file for their name server, BIND 9, to use. Previously, when I used a cPanel-based system to do something similar, there was a tool that let me simply enter a domain name, and all the available records were automagically entered into the system, saving me typing them myself (and bringing down sites with silly typos in the process). What open source tool can I use to query a domain for all the relevant subdomains and records and list them in a format, like a zone file, that I can use with other name servers?
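    One hedged sketch of the usual approach, assuming the current provider permits zone transfers from your IP (many managed DNS hosts do not): use the dnspython library to request an AXFR of the zone and print it in zone-file syntax that BIND 9 can load. The nameserver address and domain below are placeholders:

    ```python
    import dns.query
    import dns.zone

    # Placeholders: substitute the authoritative nameserver and your domain.
    NAMESERVER = "203.0.113.1"
    DOMAIN = "example.com"

    # Request a full zone transfer (AXFR); the server must allow it
    # from your IP, which most managed DNS hosts do not by default.
    zone = dns.zone.from_xfr(dns.query.xfr(NAMESERVER, DOMAIN))

    # Print the records in standard zone-file (BIND 9) syntax.
    print(zone.to_text())
    ```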

    Read the article

  • Keyboard with normal layout just without numpad?

    - by Pla
    Do you know of any keyboard that does not have a numpad and, at the same time, is not a compact keyboard? I type a lot and I enjoy using standard full-sized keyboards, but I am annoyed by the presence of the numpad. I've never used it; it just wastes desktop space! All I could find are (even more annoying) compact keyboards. Sadly, those keyboards are so compact that they cram the arrow keys and the page up/down keys into a very little space. So, does anyone know of a keyboard with a normal layout but without the numpad?

    Read the article

  • How to power off a hard drive to essentially "hot swap"

    - by Brandon
    I'm looking for either freeware or the programming basics to power off/on a hard drive. Mounting and unmounting a hard drive is simple enough using the command prompt in Windows XP. Now I need to be able to power down the hard drive so it will not become damaged when being unplugged. I would prefer this to be something simple, doable in the command prompt, a simple script, or at worst C++/C#. Freeware that meets this exact requirement would also do the job. This script/program will run on Windows XP with .NET 2.0 SP1.

    Read the article

  • Boot Problem in Asus EEE PC 1015CX

    - by Sâmrat VikrãmAdityá
    I am a newbie to the Linux world, although I have previously used Ubuntu 11.04 for daily use (net access and simple recordings using Audacity). I am not sure at what level I stand as a newbie. I bought this Asus Eee PC two days back; the model is Asus 1015CX (see the specs here: http://www.flipkart.com/asus-1015cx-blk011w-laptop-2nd-gen-atom-dual-core-1gb-320gb-linux/p/itmd8qu4quzu8srr). I created a live USB to install 12.10, and the USB booted fine. When I clicked the "Try Ubuntu" option, it showed me a black screen with a blinking cursor. I waited for 15 minutes and had to restart using the power button. On clicking the "Install Ubuntu" button, the install process went seamlessly. (I have Windows 7 installed on one of the partitions, and I installed Ubuntu alongside it.) The system was then rebooted for the first time. It showed the GRUB menu and I selected the first option, Ubuntu. After showing the splash screen for a second, it began showing various messages on a black screen and then got stuck on the "Stopping Save kernel state" message. I had to force-shut the system using the power button. Sometimes it just gives a blank screen with a blinking cursor, and on pressing the power button, some messages pop up stating that acpid is doing something and stopping services, and the system shuts down. I tried booting with "nomodeset" and other parameters, as directed in solutions to previous such problems on the forums. Also, Ctrl+Alt+F1 through F12 is not doing anything for me anywhere. At installation, I checked the "Log in automatically" option. Booting into recovery brings up several options; clicking "resume" just gives me a blank screen with a blinking cursor. After dropping to a root shell and remounting the filesystem as read-write, I was able to try some commands that worked for others:

    - startx – several messages come up, the last one stating "Fatal error: No screen found"
    - sudo service lightdm start – gives a blank screen with a blinking cursor
    - lspci | grep VGA – shows some Intel integrated graphics… something I don't remember

    I have reconfigured xserver-xorg and lightdm, and reinstalled ubuntu-desktop and unity. What should I do? Will going back to 11.04 work, or should I give up all hope of running Ubuntu on my netbook? Please help.

    Read the article

  • DHCP: server behavior in a two server situation

    - by lang2
    This is a question about server behavior in the DHCP standard. I've read the RFC and it's still not clear to me. The situation is this: there are two DHCP servers on a network. My client initially gets its IP address from server A. At some stage, server A goes down. My poor client keeps sending REQUESTs in the RENEW and then REBIND states, with no response whatsoever. My question is: in this situation, should server B respond to the REQUEST in the REBIND state, e.g. with a DHCPNAK? Thanks, lang2

    Read the article

  • How many users are "many users"?

    - by kemp
    I need to find a solution for a website which is struggling under load. The site gets ~500 simultaneous connections during peak time and counts around 42k hits per day. It's a WordPress-based site bridged with a vBulletin forum, with a lot of content and a fairly complex structure which makes intensive use of the database. I have already implemented code-level full-page caching (without this the server just crashes), configured all the other caching directives, and combined CSS files and the like to limit HTTP requests as much as possible. I need to understand whether there is more that can be done via software, or if the load is just too much for the server to handle and it needs to be upgraded, because the server goes down occasionally during peak times. I can't access the server right now, but it's a dedicated CentOS machine (I think 4GB RAM, can't say what CPU) running Apache/MySQL. So back to the main question: how can I know when the users are just too many?
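    One crude, hedged way to answer that empirically (against a staging copy of the site, never the production box) is to ramp up concurrency and watch where response times collapse. A minimal Python sketch, with the URL and concurrency levels as placeholders; dedicated tools like ab or siege do this more rigorously:

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://staging.example.com/"  # placeholder: a staging copy of the site

    def timed_get(_):
        start = time.time()
        with urlopen(URL) as resp:
            resp.read()
        return time.time() - start

    # Ramp concurrency and watch average latency; the level where it
    # collapses approximates "too many users" for the current setup.
    for workers in (10, 50, 100, 250, 500):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            times = list(pool.map(timed_get, range(workers * 2)))
        print(f"{workers:4d} concurrent: avg {sum(times) / len(times):.2f}s")
    ```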

    Read the article
