Search Results

Search found 31586 results on 1264 pages for 'custom result'.


  • Off-Page Factors That Affect Search Optimization

    Following the discussion of the on-page factors that affect search engine optimization in one of my recent writings, I think it is fitting to also discuss the off-page factors that can be of great importance to your search engine optimization campaign. Many people will not readily agree that such factors exist, but I want to state categorically that these less-considered factors are also vital to achieving results in your internet marketing business.

    Read the article

  • Code testing practice

    - by Robin Castlin
    So, like many others, I have come to the conclusion that having some way of constantly testing your code is good practice, since it means fewer people (colleagues and customers alike) have to get involved: you know what's wrong before someone else finds out the hard way. I've heard and read a bit about unit testing and understand what it's supposed to do. But there are so many different types of bugs: everything from the web browser not being able to send correct values, JavaScript failing, or a global function messing up a piece of code somewhere, to a change that looked good when testing but fails in some special case that was hard to anticipate. By finding these errors I learn to rarely repeat them, but there always seem to be new bugs to find and learn from.

    I would guess the best practice would be to run every page and its functions a couple of times, witness the result, and repeat this in Firefox, Chrome and Internet Explorer (and on all smartphones, apparently) to make sure everything works as intended. However, this would take quite some time, considering I don't work with patches or versions but do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended: basically, run a lot of cURL requests with POST values and check that I get the expected results. But then how would I avoid inflating the auto-increment IDs of my MySQL tables every time I delete the testing rows? It feels silly to be at ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some smooth way to return "TRUE" on a test run instead of rendering the actual page, but for the moment this would have to be bolted onto existing projects.

    My question: What would you recommend as the best way to test my site, to make sure existing functions still do their job when I edit the code? Should I make a batch of edits first and then manually test the entire site to confirm it still works? Is there any nice way of testing without "hurting" the ID columns?

    Extra thoughts: Would it be a good idea to map each of my files to the parts of the site they affect? For instance, if I edit home.php, the documentation would tell me to test whether the homepage still works as intended, since that is the only part of the site it should affect.
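
    To illustrate, here is a minimal sketch of the kind of cURL-driven check meant above; the page names, POST fields and expected marker strings are invented examples:

        #!/bin/bash
        # Smoke test: POST known values to each page and grep the response
        # for a string that should appear when the page works.
        base="http://localhost/mysite"

        check() {  # usage: check <page> <post-data> <expected-string>
            if curl -s -d "$2" "$base/$1" | grep -q "$3"; then
                echo "OK      $1"
            else
                echo "FAILED  $1" >&2
            fi
        }

        check "login.php"  "user=test&pass=secret" "Welcome"
        check "search.php" "q=widget"              "results found"

    As for the ID columns: wrapping test inserts in a transaction and rolling it back leaves no rows behind, though InnoDB may still consume auto-increment values on rollback, so running the tests against a copy of the database avoids the problem entirely.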

    Read the article

  • Determine Better Coding Practice

    - by footprint.
    As a new programmer, I have always found it hard to create applications, because I am still at the learning stage. I understand that to achieve a particular effect or function in an application, there are numerous ways to reach the same result. But should I just get a function to a working state, meaning that as long as it works the way I want it to, it is fine? Can any fellow programmers of a higher level kindly let me know the right way of doing things?

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background: I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline.

    The problem: The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible and as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises?

    Things we've done with little or no success:
    - Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out)
    - Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay)
    - Extend deadlines (if we charge the same, but take longer, we'll end up losing money)

    Things we've done with some success:
    - Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade-off)
    - Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses)
    - Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.)

    Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • How does the fstab 'defaults' option work? Is relatime recommended?

    - by hushs
    I know the fstab defaults option means this: rw,suid,dev,exec,auto,nouser,async. But what if I want to add one more option, for example relatime: should I still add defaults too, or are those applied anyway? Is it necessary to give at least one option? Some examples:

    1. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 defaults 0 2
    2. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 0 2
    3. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 defaults,relatime 0 2
    4. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 relatime 0 2

    Is (2) correct (no options at all)? Are (1) and (2) the same? Are (3) and (4) the same? Furthermore, I read in the Ubuntu Community Documentation that in Ubuntu 8.04 relatime was used as the default for Linux native file systems. Is that still true for 12.04? If yes, then why do I see this when I use the mount command:

    /dev/sda2 on / type ext4 (rw,errors=remount-ro)

    If no, why not? Is relatime not recommended any more? I just wanted to apply it to my non-system partitions; is that a good idea?

    EDIT: I found another command that lists the mounted partitions and their options: cat /proc/mounts. This is the result for a partition mounted with the defaults option in fstab:

    /dev/sdb2 /media/adat ext3 rw,relatime,errors=continue,barrier=1,data=ordered 0 0

    This is the output of mount for the same partition:

    /dev/sdb2 on /media/adat type ext3 (rw)

    And here are both results for the same partition mounted from Nautilus as a non-root user:

    /dev/sdb2 /media/adat ext3 rw,nosuid,nodev,relatime,errors=continue,barrier=1,data=ordered 0 0
    /dev/sdb2 on /media/adat type ext3 (rw,nosuid,nodev,uhelper=udisks)

    So it looks like relatime is used when an ext partition is mounted in 12.04, and adding it manually is unnecessary. My problem is therefore broadly solved, but I still can't see why the options that should be in defaults are not listed even by cat /proc/mounts. Maybe there is a third and even better method to list the partition mount options :)
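
    Regarding that wished-for third method: findmnt (part of util-linux, so available on 12.04) prints the effective options of a mounted filesystem in one table. A small illustration (the output shown is what one would expect for the partition above, not a verified capture):

        findmnt /media/adat
        TARGET      SOURCE    FSTYPE OPTIONS
        /media/adat /dev/sdb2 ext3   rw,relatime,errors=continue,barrier=1,data=ordered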

    Read the article

  • Quicker Loading of Your Website Ensures Improved SE Rankings

    Now that Google has released news that a website's speed will be included among the factors responsible for search engine ranking, it is likely that most website owners will want to delve into the matter a bit more deeply. According to the statement, websites that load slowly will have less chance of making their way up the search engine result pages than those that load faster.

    Read the article

  • How to execute a script as superuser, first checking the user and getting the password via askpass if not superuser

    - by pahnin
    There's a similar question out there: "How can I determine whether a shellscript runs as root or not?" I have the same doubt but want a different result. Is it possible, within the bash script and before anything else runs, to check whether the script is being run as superuser and, if not, print a message saying "You must be superuser to use this script", then get the password from the user using askpass or something like that, and execute the same script as superuser?
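
    A minimal sketch of that pattern (assuming the SUDO_ASKPASS environment variable points at an askpass helper such as ssh-askpass):

        #!/bin/bash
        # If not running as root, tell the user and re-run this same script
        # through sudo; the -A flag makes sudo ask for the password via the
        # askpass program instead of the terminal.
        if [ "$(id -u)" -ne 0 ]; then
            echo "You must be superuser to use this script." >&2
            exec sudo -A "$0" "$@"
        fi

        # ... privileged work continues here ...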

    Read the article

  • How are people using virtualisation with SQL Server? Part 1

    - by GavinPayneUK
    Allow me to share with you the results of a survey I did about how people are using virtualisation with SQL Server. This is a three-part series of articles sharing the results. In December 2010 I ran a 10-question survey to see how people were using virtualisation with SQL Server and whether they were changing their deployment and management processes as a result. 21 people responded, from at least Europe, the US and Asia Pac. It’s not a massive number but it was large enough to see some trends and some...(read more)

    Read the article

  • TopComponent, Node, Lookup, Palette, and Visual Library

    - by Geertjan
    Here's a small example that puts together several pieces in the context of a NetBeans Platform application, i.e., TopComponent, Node, Lookup, Palette, and Visual Library: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/CensusDesigner. The result is a drag-and-drop user interface: drag items from the palette and drop them onto the window. That's all it does; nothing too fancy, it just puts the basic NetBeans Platform pieces together in a pretty standard combination.

    Read the article

  • Of what significance is the solution to the game of checkers in AI research?

    - by cobie
    I have been doing some research into artificial intelligence and came across a 2007 paper titled "Checkers Is Solved", about the game of checkers being solved by AI techniques after more than 16 years of effort. The team defines a solution to the game as "determining the final result in a game with no mistakes made by either player". The search for a solution started back in 1989, and it was finally completed in 2007. Of what importance is this to the field of AI?

    Read the article

  • 12.04 display resolution changes after each reboot

    - by user72469
    I am running 12.04 64-bit on a Dell XPS 15z, and the resolution changes to 1024x768 after each reboot. As a result my desktop occupies just 3/4 of the screen, and I have to adjust it manually through the Displays option back to the maximum 1366x768 for the desktop to fill the whole display. Add to this the "system problem detected" dialog that pops up every day, and I cannot hide my dissatisfaction and frustration with 12.04...
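
    One workaround worth trying until the underlying bug is found (a sketch; the output name LVDS1 is a guess, so check the name that xrandr -q reports for the laptop panel) is to force the mode at login via a Startup Applications entry:

        xrandr --output LVDS1 --mode 1366x768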

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building a web application for a customer, and there is a requirement that users must be able to import data in different formats. Today we support XLSX and ODF as import formats, and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy the web application again when I add a new importer or change an existing one. In this posting I will show you how to add generic importer support to your web application.

    Importer interface

    All importers we use must have something in common so we can easily detect them. To keep things simple I will use an interface here.

        public interface IMyImporter
        {
            string[] SupportedFileExtensions { get; }
            ImportResult Import(Stream fileStream, string fileExtension);
        }

    Our interface has the following members:
    - SupportedFileExtensions – string array of file extensions that the importer supports. This property helps us find out what import formats are available and which importer to use with a given format.
    - Import – method that does the actual importing work. Besides the file given as a stream, we also pass the file extension so the importer can decide how to handle the file.

    It is enough to get started. When building real importers I am sure you will switch over to an abstract base class.

    Importer class

    Here is a sample importer that imports data from Excel and Word documents. The importer class with no implementation details looks like this:

        public class MyOpenXmlImporter : IMyImporter
        {
            public string[] SupportedFileExtensions
            {
                get { return new[] { "xlsx", "docx" }; }
            }

            public ImportResult Import(Stream fileStream, string extension)
            {
                // ...
            }
        }

    Finding supported import formats in the web application

    Now we have importers created and it is time to add them to the web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method, which uses reflection to find all classes that implement our IMyImporter interface.

        private static string[] GetImporterFileExtensions()
        {
            var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                        from t in a.GetTypes()
                        where t.GetInterfaces().Contains(typeof(IMyImporter))
                        select t;

            var extensions = new Collection<string>();
            foreach (var type in types)
            {
                var instance = (IMyImporter)type.InvokeMember(null,
                               BindingFlags.CreateInstance, null, null, null);

                foreach (var extension in instance.SupportedFileExtensions)
                {
                    if (extensions.Contains(extension))
                        continue;

                    extensions.Add(extension);
                }
            }

            return extensions.ToArray();
        }

    This code doesn’t look nice and is far from optimal, but it works for us now. It is possible to improve the performance of the web application if we cache the extensions and their corresponding types in some static dictionary. We have to fill it only once, because our application is restarted when something changes in the bin folder.

    Finding an importer by extension

    When the user uploads a file, we need to detect the extension of the file and find the importer that supports the given extension. We add another method to our page or controller that uses reflection and returns an importer instance, or null if the extension is not supported.

        private static IMyImporter GetImporterForExtension(string extensionToFind)
        {
            var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                        from t in a.GetTypes()
                        where t.GetInterfaces().Contains(typeof(IMyImporter))
                        select t;

            foreach (var type in types)
            {
                var instance = (IMyImporter)type.InvokeMember(null,
                               BindingFlags.CreateInstance, null, null, null);

                if (instance.SupportedFileExtensions.Contains(extensionToFind))
                {
                    return instance;
                }
            }

            return null;
        }

    Here is an example ASP.NET MVC controller action that accepts an uploaded file, finds an importer that can handle it, and imports the data. Again, this is sample code I kept minimal to better illustrate how things work.

        public ActionResult Import(MyImporterModel model)
        {
            var file = Request.Files[0];
            var extension = Path.GetExtension(file.FileName).ToLower();
            var importer = GetImporterForExtension(extension.Substring(1));
            var result = importer.Import(file.InputStream, extension);

            if (result.Errors.Count > 0)
            {
                foreach (var error in result.Errors)
                    ModelState.AddModelError("file", error);

                // Redisplay the form (the parameterless overload is not shown here).
                return Import();
            }

            return RedirectToAction("Index");
        }

    Conclusion

    That’s it. Using a couple of ugly methods and one simple interface, we were able to add importer support to our web application. The example code here is not perfect, but it works. It is possible to cache the mappings between file extensions and importer types in some static variable, because a change to these mappings means something changed in the bin folder of the web application, and the web application is restarted in that case anyway.
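
    As a rough illustration of that caching idea (a sketch, not code from the original posting; it assumes the same usings as the snippets above plus System.Collections.Generic):

        // Scan the loaded assemblies once and keep an extension-to-type map
        // for the lifetime of the application.
        private static readonly Dictionary<string, Type> ImporterTypes = BuildImporterTypeMap();

        private static Dictionary<string, Type> BuildImporterTypeMap()
        {
            var map = new Dictionary<string, Type>();
            var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                        from t in a.GetTypes()
                        where t.GetInterfaces().Contains(typeof(IMyImporter))
                        select t;

            foreach (var type in types)
            {
                var instance = (IMyImporter)Activator.CreateInstance(type);
                foreach (var extension in instance.SupportedFileExtensions)
                    map[extension] = type;   // last registration wins
            }

            return map;
        }

        // GetImporterForExtension then reduces to a dictionary lookup:
        //     Type type;
        //     return ImporterTypes.TryGetValue(extensionToFind, out type)
        //         ? (IMyImporter)Activator.CreateInstance(type)
        //         : null;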

    Read the article

  • Change the shape of body dynamically

    - by user45491
    I have a problem: I have a balloon which I need to continuously inflate and deflate in the update method. I have tried to use setScaleCenter but it is not giving the desired result. Below is the code I am trying to make work:

        scale += (float) ((dist - lastDist) / lastDist);
        Log.d("pinch", "scale is " + scale);
        Log.d("pinch", "change in scale is " + (float) ((lastDist - dist) / lastDist));
        player.setScaleCenter(scale, scale);
        player.setScale(scale);
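
    One likely culprit (an assumption based on the AndEngine API, which the snippet above resembles): setScaleCenter() expects the pivot point in the sprite's local coordinates, not a scale factor, so passing the scale there moves the pivot on every update. A minimal sketch that pins the pivot once and animates only the factor:

        // Once, after creating the sprite: pivot at the sprite's centre.
        player.setScaleCenter(player.getWidth() / 2, player.getHeight() / 2);

        // Per pinch update: change only the scale factor.
        scale += (dist - lastDist) / lastDist;
        player.setScale(scale);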

    Read the article

  • Making Your Website Rank Better

    Using a website to promote a business is one of the most effective ways of reaching more potential clients and getting them to know and buy your product or service. Search engine optimization will help you drive more traffic to your website, which will most likely result in increased sales.

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API

    The past couple of projects I've been working on have included the use of the Google Maps API and geocoding service in websites for various reasons. I decided to tie together some of the lessons learned, build an ASP.NET store locator demo, and write about it on 4Guys. Last week I published the first article in what I think will be a three-part series: Building a Store Locator ASP.NET Application Using Google Maps (Part 1). Part 1 walks through creating a demo where a user can type in an address and any stores within a (roughly) 15 mile area will be displayed in a grid. The article begins with a look at the database used to power the store locator (namely, a single table that contains one row for every location, with each location storing its store number, address, and, most importantly, latitude and longitude coordinates) and then turns to using Google's geocoding service to translate a user-entered address into latitude and longitude coordinates. The latitude and longitude coordinates are used to find nearby stores, which are then displayed in a grid.

    Part 2 looks at enhancing the search results to include a map with markers indicating the position of each nearby store location. The Google Maps API, along with a bit of client-side script and server-side logic, makes this actually pretty straightforward and easy to implement. A screen shot of the improved store locator results is included in the article.

    Part 3, which I plan on publishing next week, looks at how to enhance the map by using information windows to display address information when clicking a marker. Additionally, I'll show how to use custom icons for the markers so that instead of having the same marker for each nearby location the markers will be images numbered 1, 2, 3, and so on, corresponding to the number assigned to each search result in the grid. The idea is that by numbering the search results in the grid and the markers on the map, visitors will quickly be able to see which marker corresponds to which search result.

    This article and demo have been a lot of fun to write and create, and I hope you enjoy reading it, too.

    Building a Store Locator ASP.NET Application Using Google Maps API (Part 1)
    Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Happy Programming!
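
    To make the nearby-store step concrete, here is a hedged sketch (not code from the article) of a great-circle distance helper one could use to keep only stores within roughly 15 miles of the geocoded point:

        using System;

        static class GeoDistance
        {
            // Great-circle distance in miles between two latitude/longitude
            // points, via the haversine formula.
            public static double Miles(double lat1, double lon1, double lat2, double lon2)
            {
                const double earthRadiusMiles = 3958.8;
                double dLat = ToRadians(lat2 - lat1);
                double dLon = ToRadians(lon2 - lon1);
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                           Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                           Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
                return earthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
            }

            private static double ToRadians(double degrees)
            {
                return degrees * Math.PI / 180.0;
            }
        }

    Filtering the locations table to rows where GeoDistance.Miles(userLat, userLng, storeLat, storeLng) <= 15 gives the "roughly 15 mile area" behaviour described above; the article itself may push this filter into the database query instead.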

    Read the article

  • SEO Tips - Avoiding 3 Common Mistakes

    Some of the most effective SEO tips are also some of the easiest to implement and are often overlooked. An effectively optimized site can increase the amount of free and targeted search engine traffic you receive. Read on to see 3 search engine optimization tips that are easy to follow and will result in more free traffic to your site.

    Read the article

  • In C and C++, what methods can prevent accidental use of the assignment (=) where equality (==) is needed?

    - by DeveloperDon
    In C and C++, it is very easy to write the following code with a serious error.

        char responseChar = getchar();
        int confirmExit = 'y' == tolower(responseChar);
        if (confirmExit = 1) {
            exit(0);
        }

    The error is that the if statement should have been:

        if (confirmExit == 1)

    As coded, it will exit every time, because the assignment to confirmExit happens first and the assigned value (1, i.e. true) is then used as the result of the expression. Are there good ways to prevent this kind of error?
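
    Two widely used defenses, shown in a minimal sketch (this is illustrative, not from the original question):

        /* Build with warnings enabled, e.g.: gcc -Wall -Werror demo.c */
        #include <ctype.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            char responseChar = (char) getchar();
            int confirmExit = ('y' == tolower((unsigned char) responseChar));

            /* Defense 1: the "Yoda condition". With the constant on the
               left, an accidental "1 = confirmExit" fails to compile. */
            if (1 == confirmExit) {
                exit(0);
            }

            /* Defense 2: compiler warnings. With -Wall, gcc and clang warn
               about "if (confirmExit = 1)" unless the assignment is wrapped
               in an extra pair of parentheses, so an honest mistake is
               caught at build time. */
            return 0;
        }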

    Read the article

  • Retrieving SQL Server Fixed Database Roles for Disaster Recovery

    We ran into a case recently where we had the logins and users scripted out on our SQL Server instances, but we didn't have the fixed database roles for a critical database. As a result, our recovery efforts were only partially successful. We ended up trying to figure out what the database role memberships were for the database we recovered, but we'd like not to be in that situation again. Is there an easy way to do this?
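
    One well-known way to capture the memberships ahead of time (a sketch of the general approach, not necessarily the tip's full solution) is to query the catalog views in each database and store the output alongside the login scripts:

        -- Lists database role memberships (fixed and user-defined roles)
        -- in the current database.
        SELECT r.name AS role_name,
               m.name AS member_name
        FROM   sys.database_role_members AS drm
               JOIN sys.database_principals AS r
                 ON drm.role_principal_id = r.principal_id
               JOIN sys.database_principals AS m
                 ON drm.member_principal_id = m.principal_id
        ORDER  BY r.name, m.name;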

    Read the article

  • Look to Buy Relevant Backlinks

    Why do you see websites online? Primarily because they need to reach the people they are directed at. And what is the point if a website fails to reach those people effectively? The purpose of marketing is to be result-oriented. So here is your chance to make sure that your website is doing well on the internet.

    Read the article
