Search Results

Search found 1341 results on 54 pages for 'factor mystic'.

Page 14 of 54

  • 3 Basic Tips to Research Keywords For SEO Campaign

    As far as the on-page factors of search engine optimization (SEO) are concerned, keyword research is the first and most important factor in the success of any SEO campaign. Now let's discuss the three basics of researching and shortlisting an effective keyword list for an SEO campaign.

    Read the article

  • A Beginner's Guide to SEO - The Basics

    In the world of Internet marketing, one important factor that will either make or break your bank is SEO. As a short introduction to this famous acronym: SEO stands for search engine optimization.

    Read the article

  • The Perfect Link Building

    We've all heard that building links is the most important factor in getting high search positions, but how do you get that perfect link? This article will show you how to obtain the best links possible.

    Read the article

  • Your Site Speed and You

    You may have already heard that site speed is now a factor in your website's search engine rankings - especially for Google. In this article, I'm going to attempt to identify the who, what, when, where, why, and how to improve for your website. Wish me luck!

    Read the article

  • Should You Bother to Create Meta Tags For Your Website?

    Some people will tell you that since Google no longer uses meta tags as a factor in search engine placement, it is no longer necessary to include them in your website SEO. It is true that the Big G is top dog in the search engine game, and it is true that having meta tags will not do anything for your Google SERP, but what about the rest of the search engines?

    Read the article

  • The Relationship Between PageRank and Backlinks

    Search engine optimisation (SEO) is the science of making the most of a website in terms of attracting new visitors and potential clients. SEO can be divided into on-page optimisation and off-page optimisation. On-page optimisation relates to content within the website. Off-page optimisation involves increasing the total value of backlinks to the website. PageRank is the most important factor in search engine positioning.

    Read the article

  • Ranking High in the Search Engines Using Back Links

    If I were to say that there is one thing that outranks everything else in importance when it comes to search engine optimization, I'd bet that most people would not believe me. There is so much information and advice available online, from sources such as forums and blogs, that in my view focuses on the less important aspects of SEO. But in my five years' experience as an internet marketer, I can say with certainty that the most significant factor is: links.

    Read the article

  • SEO Work For Small Business - The Importance of Prioritising This

    Prioritising your search engine optimisation (SEO) work is a decisive factor that will lead to the success of your small business. Even if SEO is just part of your entire marketing plan, it still has enormous significance as it is the one that generates traffic to your website. This traffic is where you will be able to get prospects, who will eventually be converted into clients.

    Read the article

  • From the Tips Box: Waterproof Boomboxes, Quick Access Laptop Stats, and Stockpiling Free Apps and Books

    - by Jason Fitzpatrick
    Once a week we round up some great reader tips and share them with everyone. This week we’re looking at building a waterproof boombox, quick access to laptop stats in Windows 7, and how to stockpile free apps and books at Amazon.

    How to Banish Duplicate Photos with VisiPic
    How to Make Your Laptop Choose a Wired Connection Instead of Wireless
    HTG Explains: What Is Two-Factor Authentication and Should I Be Using It?

    Read the article

  • How do I disable MEDIUM and WEAK/LOW strength ciphers in Apache + mod_ssl?

    - by superwormy
    A PCI Compliance scan has suggested that we disable Apache's MEDIUM and LOW/WEAK strength ciphers for security. Can someone tell me how to disable these ciphers?

    Apache v2.2.14
    mod_ssl v2.2.14

    This is what they've told us:

    Synopsis: The remote service supports the use of medium strength SSL ciphers.
    Description: The remote host supports the use of SSL ciphers that offer medium strength encryption, which we currently regard as those with key lengths at least 56 bits and less than 112 bits.
    Solution: Reconfigure the affected application if possible to avoid use of medium strength ciphers.
    Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N)

    Synopsis: The remote service supports the use of weak SSL ciphers.
    Description: The remote host supports the use of SSL ciphers that offer either weak encryption or no encryption at all.
    See also: http://www.openssl.org/docs/apps/ciphers.html
    Solution: Reconfigure the affected application if possible to avoid use of weak ciphers.
    Risk Factor: Medium / CVSS Base Score: 5.0 (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N)
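
    One way to address findings like these (a sketch of the usual approach, not an answer quoted from this question) is to restrict mod_ssl to high-strength ciphers and switch off SSLv2; the exact cipher string below is an assumption and should be re-checked against the scanner:

        # Hypothetical vhost fragment for Apache 2.2 + mod_ssl.
        # HIGH keeps suites with strong (roughly 128-bit and larger) keys; the
        # negations drop null, anonymous, export-grade and other weak/medium suites.
        SSLProtocol all -SSLv2
        SSLCipherSuite HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5
        SSLHonorCipherOrder on

    The resulting cipher list can be previewed with openssl ciphers -v 'HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5' before rescanning.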

    Read the article

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox 192.168.17.47
    Network: Only a switch between them - the subnet is 192.168.16/22
    (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also.)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first 6 packets from wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.

    Code:
    No.  Time      Source         Destination    Protocol Length Info
    108  6.699922  192.168.17.47  192.168.18.30  TCP   78  49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
    115  6.781971  192.168.18.30  192.168.17.47  TCP   74  http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
    116  6.782218  192.168.17.47  192.168.18.30  TCP   66  49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
    117  6.782220  192.168.17.47  192.168.18.30  HTTP 490  GET /utils/speedtest/large.file.zip HTTP/1.1
    118  6.867070  192.168.18.30  192.168.17.47  TCP  375  [TCP segment of a reassembled PDU]

    Details:
    Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
        Source port: http (80)
        Destination port: 49190 (49190)
        [Stream index: 4]
        Sequence number: 1 (relative sequence number)
        [Next sequence number: 310 (relative sequence number)]
        Acknowledgement number: 425 (relative ack number)
        Header length: 32 bytes
        Flags: 0x018 (PSH, ACK)
        Window size value: 130
        [Calculated window size: 66560]
        [Window size scaling factor: 512]
        Checksum: 0xd182 [validation disabled]
        Options: (12 bytes)
            No-Operation (NOP)
            No-Operation (NOP)
            Timestamps: TSval 2617517423, TSecr 945617490
        [SEQ/ACK analysis]
        TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
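
    A quick check of the arithmetic in that trace (an illustration of RFC 1323 window scaling, not an answer from the thread): the 16-bit window field is multiplied by 2^shift, where the shift count is negotiated in the SYN, so WS=512 corresponds to a shift of 9. FreeBSD typically derives the shift from the largest receive buffer it may ever use, which is why the factor can look oversized even though the effective window here stays near 64K.

        // Minimal C++ sketch of how the "Calculated window size" in packet 118 is derived.
        #include <cstdint>
        #include <iostream>

        int main() {
            std::uint16_t window_field = 130;  // "Window size value" from the packet details
            int shift = 9;                     // WS=512 means a shift count of 9 (2^9 = 512)
            std::uint32_t effective = static_cast<std::uint32_t>(window_field) << shift;
            std::cout << "Calculated window size: " << effective << " bytes\n";  // prints 66560
            return 0;
        }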

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which kind of complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst case scenario to measure availability. So if our origin servers are having problems, yet the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, given that these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red-tape pretty quick). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
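
    For what it's worth, a traffic-weighted calculation (with invented numbers, purely to illustrate the gap) shows how much the worst-case rule can cost: if the CDN serves 75% of requests at 99.9% availability and the origin serves the remaining 25% at 98.0%, the weighted figure is 0.75 x 0.999 + 0.25 x 0.980 = 0.99425, or about 99.4%, while the worst-case rule would record 98.0% for the same period.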

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf:

        upstream gunicorn {
            server 127.0.0.1:8080 fail_timeout=0;
        }

        server {
            listen 80;
            listen 443 ssl;
            server_name domain.com ~^.+\.domain\.com$;

            location / {
                try_files $uri @proxy;
            }

            location @proxy {
                proxy_pass_header Server;
                proxy_redirect off;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 120;
                proxy_pass http://gunicorn;
            }
        }

    The same server needs to serve both HTTP and HTTPS; however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that will correct this issue is changing proxy_redirect to the following:

        proxy_redirect http:// https://;

    That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried:

        if ($scheme = 'https') {
            proxy_redirect http:// https://;
        }

    But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have to duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity's sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?
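
    One thing that may be worth trying (a sketch relying on proxy_redirect accepting variables in the replacement string, not an answer taken from this thread; verify against your nginx version's docs) is to make the rewritten scheme follow the scheme of the incoming request, so a single server block can keep serving both ports:

        # Hypothetical tweak inside the existing @proxy location:
        # rewrite upstream Location headers to whatever scheme the client used,
        # instead of forcing https:// unconditionally.
        proxy_redirect http:// $scheme://;

        # Optionally pass the real scheme upstream too, rather than hard-coding https:
        proxy_set_header X-Forwarded-Proto $scheme;

    If the upstream honours X-Forwarded-Proto and builds its redirects from it, the proxy_redirect rewrite may become unnecessary altogether.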

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? (2 answers)

    We are a start-up, and 6 months back we launched our beta version website. Now we are in the phase of building our website and web services for the final product. The website will be based on PHP, Python and a MySQL database, running on a WAMP server. Right now, in the beta version, we are using an Azure VM for hosting, with 786MB RAM and a shared CPU. We have on average 200 users coming to our website daily. Now we are trying to grow from 200 to 1500 daily users, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can add further load on the server. So here are the questions that bring me here:

    I am pretty confused about whether to go with shared hosting or VM-based hosting. If a VM, then what configuration will be best for our requirements (as discussed above)? Currently our VM is a Windows-based server and it is very simple to manage, so other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing the server for our requirements?
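
    A rough sizing sanity check (every number below is an assumption for illustration, not something from the question; measure your own per-request footprint before deciding): 1500 visits per day is small in throughput terms, so the binding constraints are usually concurrent application workers and database memory.

        100 concurrent requests x ~40 MB per PHP/Python worker  ~= 4 GB for the app tier
        MySQL buffer pool plus OS cache                          ~= 1-2 GB more
        => something like a 4-8 GB, 2-4 core VM; the current 786 MB shared-CPU VM
           sits well below that target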

    Read the article

  • SetSel on EN_SETFOCUS or WM_SETFOCUS doesn't work

    - by Coder
    I've run into the next mystic thing in WinAPI/MFC. I have an edit box whose contents I have to select on Tab, Lclick, Rclick, Mclick and so on. The sort-of-obvious path is to handle the SETFOCUS message and call SetSel(0, -1), which should select all text. But it doesn't work! What's wrong? I tried googling; everyone seems to override Lclicks or handle SetSel in parent windows, but this is wrong from an encapsulation point of view, and multiple clicks (when the user wants to insert something in the middle of the text) will also break, and so on. Why isn't my approach working? I tried like 10 different ways, tried to trap all possible focus messages, looked up info on MSDN, but nothing works as expected. Also, I need to recreate the caret on focus, which also doesn't seem to work. The SETFOCUS message gets trapped alright - if I add __asm int 3, it breaks every time. It's the CreateCaret and SetSel calls that get swallowed, it seems.
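
    A common explanation for this class of problem (a well-known workaround sketched from general Win32/MFC experience, not an answer quoted from this question) is that the edit control's default focus and mouse handling runs after the EN_SETFOCUS handler and repositions the caret, undoing the selection; posting EM_SETSEL instead of calling SetSel directly lets the select-all run last:

        // Hypothetical MFC handler; IDC_MY_EDIT is an assumed control ID.
        void CMyDialog::OnEnSetfocusMyEdit()
        {
            CEdit* pEdit = (CEdit*)GetDlgItem(IDC_MY_EDIT);
            if (pEdit != NULL)
            {
                // Posted, not sent, so it is processed after the pending
                // focus/mouse messages that would otherwise reset the caret.
                pEdit->PostMessage(EM_SETSEL, 0, -1);
            }
        }

    For the Tab case this alone is often enough; for mouse clicks some implementations also subclass the edit control and post the same message from the button-up handler.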

    Read the article

  • How can I stop my program from crashing in C++?

    - by Rachel
    I'm very new to programming and I am trying to write a program that adds and subtracts polynomials. My program sometimes works, but most of the time it randomly crashes and I have no idea why. It's very buggy and has other problems I'm trying to fix, but I am unable to really get any further coding done since it crashes. I'm completely new here but any help would be greatly appreciated. Here's the code:

        #include <iostream>
        #include <cstdlib>
        using namespace std;

        int getChoice(void);

        class Polynomial10 {
        private:
            double* coef;
            int degreePoly;
        public:
            Polynomial10(int max);                   //Constructor for a new Polynomial10
            int getDegree(){return degreePoly;};
            void print();                            //Print the polynomial in standard form
            void read();                             //Read a polynomial from the user
            void add(const Polynomial10& pol);       //Add a polynomial
            void multc(double factor);               //Multiply the poly by scalar
            void subtract(const Polynomial10& pol);  //Subtract polynom
        };

        void Polynomial10::read() {
            cout << "Enter degree of a polynom between 1 and 10 : ";
            cin >> degreePoly;
            cout << "Enter space separated coefficients starting from highest degree" << endl;
            for (int i = 0; i <= degreePoly; i++) {
                cin >> coef[i];
            }
        }

        void Polynomial10::print() {
            for (int i = 0; i <= degreePoly; i++) {
                if (coef[i] == 0) {
                    cout << "";
                } else if (i >= 0) {
                    if (coef[i] > 0 && i != 0) { cout << "+"; }
                    if ((coef[i] != 1 && coef[i] != -1) || i == degreePoly) { cout << coef[i]; }
                    if ((coef[i] != 1 && coef[i] != -1) && i != degreePoly) { cout << "*"; }
                    if (i != degreePoly && coef[i] == -1) { cout << "-"; }
                    if (i != degreePoly) { cout << "x"; }
                    if ((degreePoly - i) != 1 && i != degreePoly) {
                        cout << "^";
                        cout << degreePoly - i;
                    }
                }
            }
        }

        void Polynomial10::add(const Polynomial10& pol) {
            for (int i = 0; i < degreePoly; i++) {
                int degree = degreePoly;
                coef[degreePoly - i] += pol.coef[degreePoly - (i + 1)];
            }
        }

        void Polynomial10::subtract(const Polynomial10& pol) {
            for (int i = 0; i < degreePoly; i++) {
                coef[degreePoly - i] -= pol.coef[degreePoly - (i + 1)];
            }
        }

        void Polynomial10::multc(double factor) {
            //int degreePoly=0;
            //double coef[degreePoly];
            cout << "Enter the scalar multiplier : ";
            cin >> factor;
            for (int i = 0; i < degreePoly; i++) {
                coef[i] *= factor;
            }
        };

        Polynomial10::Polynomial10(int max) {
            degreePoly = max;
            coef = new double[degreePoly];
            for (int i; i < degreePoly; i++) {
                coef[i] = 0;
            }
        }

        int main() {
            int choice;
            Polynomial10 p1(1), p2(1);
            cout << endl << "CGS 2421: The Polynomial10 Class" << endl << endl << endl;
            cout << "0. Quit\n"
                 << "1. Enter polynomial\n"
                 << "2. Print polynomial\n"
                 << "3. Add another polynomial\n"
                 << "4. Subtract another polynomial\n"
                 << "5. Multiply by scalar\n\n";
            int choiceFirst = getChoice();
            if (choiceFirst != 1) {
                cout << "Enter a Polynomial first!";
            }
            if (choiceFirst == 1) { choiceFirst = choice; }
            while (choice != 0) {
                switch (choice) {
                    case 0: return 0;
                    case 1: p1.read(); break;
                    case 2: p1.print(); break;
                    case 3: p2.read(); p1.add(p2); cout << "Updated Polynomial: "; p1.print(); break;
                    case 4: p2.read(); p1.subtract(p2); cout << "Updated Polynomial: "; p1.print(); break;
                    case 5: p1.multc(10); cout << "Updated Polynomial: "; p1.print(); break;
                }
                choice = getChoice();
            }
            return 0;
        }

        int getChoice(void) {
            int c;
            cout << "\nEnter your choice : ";
            cin >> c;
            return c;
        }
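
    A few likely culprits stand out on a quick read (editorial guesses, not a diagnosis confirmed in the thread): the constructor allocates degreePoly doubles but read() and print() index coef[0..degreePoly], one past the end; read() also lets the user raise degreePoly above the size the constructor allocated (main() constructs p1 and p2 with max = 1); and main() uses the uninitialized variable choice, because the assignment choiceFirst = choice runs the wrong way. All of these are undefined behaviour, which would explain the intermittent crashes. A minimal sketch of the fixes under those assumptions:

        // Hypothetical repair, assuming the rest of the program stays as posted.

        // 1. Allocate room for the largest supported polynomial so read() can never
        //    write past the end of the array (degree 10 needs 11 coefficients).
        Polynomial10::Polynomial10(int max) {
            degreePoly = max;
            coef = new double[11];
            for (int i = 0; i < 11; i++) {   // note: i was left uninitialized before
                coef[i] = 0;
            }
        }

        // 2. In main(), initialize choice from the first menu selection instead of
        //    copying the uninitialized choice into choiceFirst:
        //        if (choiceFirst == 1) { choiceFirst = choice; }   // original
        //        if (choiceFirst == 1) { choice = choiceFirst; }   // intended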

    Read the article

  • How to deal with databases for websites written in Java, more specifically Wicket?

    - by John
    Hi there. I'm new to website development using Java, but I've got started with Wicket and made a little website. I'd like to expand on what I've already made (a website with a form, labels and links) and implement database connectivity. I've looked at a couple of examples, for example Mystic Paste, and I see that they're using Hibernate and Spring. I've never touched Hibernate or Spring before and, to be honest, the heavy use of annotations scares me a little bit as I haven't really made use of them before, with the exception of suppressing warnings and overriding. At this point I have one Connection object which I set up in the WebApplication class upon initialization. I then retrieve this Connection object whenever I need to perform queries. I don't know if this is a bad approach or not for a production web application. All help is greatly appreciated.

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    Time measurement method and potential pitfalls:

    Stopwatch - Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change luckily; Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C- and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

    DateTime.Now - Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

    Reporting method and potential pitfalls:

    Console.WriteLine - Ok if not called too often.

    Debug.Print - Are you really measuring performance with Debug builds? Shame on you.

    Trace.WriteLine - Better, but you need to plug in some good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which does also take time.

    In general it is a good idea to use some tracing library which does measure the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I did compare measuring performance with quantum mechanics. This analogy does work surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation does tell us that you cannot measure the impulse and location of a particle in a quantum system at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values do consume all available memory there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they do calculate this overhead and subtract it from the measured values to give you the most accurate values, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again with tracing only the specific methods to check if this method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! - Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I have found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer     Time in ms   N objects
        protobuf-net          807   1,000,000
        DataContract        4,402   1,000,000

    Nearly a factor 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data has dropped by a factor of two (good) and the performance has worsened by nearly a factor of two. How is that possible? We have measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it does use some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

        Serializer     Time in ms   N objects
        protobuf-net           85   1
        DataContract           24   1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point where the code-gen cost is amortized by its faster serialization performance is (assuming small objects) somewhere between 20,000-40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far do not help much. After looking deeper at the profiling data I did find that one of the 1000 calls did take 50% of the time. So how do I find out which call it was?
    Normal profilers do fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which does, unlike normal profilers, create traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call did happen during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, then the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly ok.

    When you do profile a complex system with many interconnected processes you can never be sure that the timings you just did measure are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today.

    Thanks to GC and other random stuff it can become pretty hard to find stuff worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I do make the code changes and do measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where the stuff does get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers do also start working again.

    As Vance Morrison does point out, it is much more complex to profile a system against the wall clock compared to optimizing for CPU time. The reason is that for wall clock time analysis you need to understand how your system does work and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all.

    Next time: Commercial profiler shootout.

    Read the article
