Search Results

Search found 4291 results on 172 pages for 'cluster analysis'.

Page 90/172 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • List of freely available SEO tools (software) for keyword rank checking? [closed]

    - by Craig
    Possible Duplicate: can anyone recommend a Google SERP tracker? Requirements: analysis of site positions for a list of keywords in different search engines; tracking keyword positions on search engines (I want to see if my keyword rankings have moved up or down); creating reports. I use Excel plus the Rank Checker addon for Firefox to analyze the site's position in search engines for my keyword list. Are there any tools that are tested and work properly? Thanks.

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters), and that adding cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this cannot be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).
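
    For illustration, here is a minimal sketch of the PartitionedFilter approach against the public Coherence 3.7-era API; the cache contents, the EqualsFilter extractor, and the affinity key are hypothetical:

        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.PartitionedFilter;
        import java.util.Set;

        public class AffinityQuery {
            // Query only the partition owning the given affinity key, so a single
            // storage member answers instead of every member in the cluster.
            public static Set queryOnePartition(NamedCache cache, Object affinityKey) {
                PartitionedService service = (PartitionedService) cache.getCacheService();
                int partition = service.getKeyPartitioningStrategy().getKeyPartition(affinityKey);
                PartitionSet parts = new PartitionSet(service.getPartitionCount());
                parts.add(partition);
                Filter query = new EqualsFilter("getStatus", "OPEN"); // hypothetical extractor
                return cache.entrySet(new PartitionedFilter(query, parts));
            }
        }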

    Read the article

  • LinuxCon North America 2014

    - by Chris Kawalek
    As the first day of LinuxCon North America 2014 draws to a close, we want to thank all the folks who stopped by our booth today! If you're at the show, please stop by booth #204 and have a chat with our experts. And you won't want to miss these Linux and virtualization related sessions coming up tomorrow and Friday:
    - Thursday, Aug 21: Static Analysis in the Linux Kernel Using Smatch - Dan Carpenter, Oracle
    - Friday, Aug 22: The Proper Care and Feeding of MySQL Databases for Linux Admins Who Also Have DBA Duties - Morgan Tocker, Oracle
    - Friday, Aug 22: Why Use Xen for Large Scale Enterprise Deployments - Konrad Rzeszutek Wilk, Oracle
    We look forward to meeting with you and we hope you enjoy the rest of LinuxCon North America 2014!

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, they then had to be withdrawn. The good news is that they're back and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:
    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).
    A custom PAS may need to solve several problems simultaneously, such as:
    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary partitions and the same number of backup partitions).
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded).
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, ensuring that each partition is collocated with its corresponding message queue).
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions).
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure).
    - Achieving the above goals while ensuring that partition movement is minimized.
    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the optimal one. This would result in N! (factorial) evaluations of the scoring function, which is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that algorithms will tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both); it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
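
    To make the scoring-function idea concrete, here is a toy sketch (not the Coherence PAS interface, and deliberately greedy rather than exhaustive): it scores a mapping by the variance of per-member partition counts and places each partition on the member where the score increases least. A real PAS would add terms for backup placement, machine topology, and movement cost.

        import java.util.Arrays;

        public class GreedyAssign {
            // Lower is better: variance of partition counts across members.
            static double score(int[] load) {
                double mean = Arrays.stream(load).average().orElse(0);
                return Arrays.stream(load)
                             .mapToDouble(c -> (c - mean) * (c - mean))
                             .sum();
            }

            static int[] assign(int partitions, int members) {
                int[] owner = new int[partitions];
                int[] load = new int[members];
                for (int p = 0; p < partitions; p++) {
                    int best = 0;
                    double bestScore = Double.MAX_VALUE;
                    for (int m = 0; m < members; m++) {
                        load[m]++;                  // tentatively place p on m
                        double s = score(load);
                        load[m]--;                  // undo the tentative placement
                        if (s < bestScore) { bestScore = s; best = m; }
                    }
                    owner[p] = best;
                    load[best]++;
                }
                return owner;
            }
        }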

    Read the article

  • Improving Shopfloor Data Collection with Oracle Manufacturing Operations Center

    Successful factories around the world leverage information to drive their production and supply chains. New tools are available today to further accelerate data collection, analysis, contextualization, and collaboration among the various stakeholders in the manufacturing process. Oracle Manufacturing Operations Center (MOC) addresses the factory's need for accurate and timely information about product and process quality, insight into shop floor operations, and performance of production assets. It solves the complex problem of connecting fragmented, disconnected shop floor data to the business context of your ERP, and provides a solid foundation for running Continuous Improvement (CI) programs such as Lean and Six Sigma.

    Read the article

  • Strategies for removing register_globals from a file

    - by Jonathan Rich
    I have a file (or rather, a list of about 100 files) in my website's repository that still requires register_globals and other nastiness (like custom error reporting, etc.) because the code is so bad, throws notices, and is 100% procedural with few subroutines. We want to move to PHP 5.4 (and eventually 5.5) this year, but can't until we can port these files over, clean them up, etc. The average file length is about 1,000 lines. I've already cleaned up a few of the low-hanging fruit; however, the job took almost an entire day for two 300-500 line files. I am in a quagmire here (giggity). Anyway, has anyone else dealt with this in the past? Are there any strategies besides tracing backwards through the code? Most static analysis tools don't look at code outside of functions - are there any that will look at the procedural code and help find at least some of the problems?

    Read the article

  • Is OpenStack suitable as a fault tolerant DB host?

    - by Jit B
    I am trying to design a fault tolerant DB cluster (schema does not matter) that would not require much maintenance. After looking at almost everything from MySQL to MongoDB to HBase, I still find that no DB is easily scalable - Cassandra comes close but it has its own set of problems. So I was thinking: what if I run something like MySQL or OrientDB on top of a large OpenStack VM? The VM would be fault tolerant by itself, so I don't need to do it at the DB level. Is it viable? Has it been done before? If not, what are the possible problems with this approach?

    Read the article

  • SQL Server 2012 Cumulative Update #1 is available!

    - by AaronBertrand
    While I joked earlier this month that SQL Server 2012 Service Pack 1 was released on the same day as General Availability (hey, it's Microsoft's fault since they decided to GA on April 1), this time it isn't a joke. Today Microsoft has released Cumulative Update #1 for SQL Server 2012. About half of the fixes affect the database engine. Analysis Services and Data Quality Services make up the bulk of the remainder. If you're running SQL Server 2012 now, I suggest you apply the update.

    Read the article

  • Find Waldo with Mathematica

    - by Jason Fitzpatrick
    If you're looking for a geeky (and speedy) way to find Waldo, of Where's Waldo? fame, this series of Mathematica scripts makes it a snap. Over at Stack Overflow, programmer Arnoud Buzing shares a clever bit of Mathematica-based coding that analyzes a Where's Waldo? drawing and finds the elusive Waldo. Hit up the link below to see the distinct steps of analysis with accompanying photos. How Do I Find Waldo with Mathematica? [via Make]

    Read the article

  • Consolidating SQL Server Error Logs from Multiple Instances Using SSIS

    SQL Server hides a lot of very useful information in its error log files. Unfortunately, the process of hunting through all these logs, file-by-file, server-by-server, can cause a problem. Rodney Landrum offers a solution which will allow you to pull error log records from multiple servers into a central database, for analysis and reporting with T-SQL.
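
    The article's solution is SSIS-based; purely to give a flavor of the idea, here is a minimal JDBC sketch (the server list, central table, and credentials are hypothetical) that pulls each instance's current error log via the sp_readerrorlog system procedure into one central table:

        import java.sql.*;

        public class ErrorLogCollector {
            public static void main(String[] args) throws Exception {
                String[] servers = { "sqlprod1", "sqlprod2" }; // hypothetical instances
                try (Connection central = DriverManager.getConnection(
                         "jdbc:sqlserver://central;databaseName=DBA", "user", "pass");
                     PreparedStatement ins = central.prepareStatement(
                         "INSERT INTO ErrorLogCentral (ServerName, LogDate, ProcessInfo, LogText) "
                         + "VALUES (?, ?, ?, ?)")) {
                    for (String server : servers) {
                        try (Connection con = DriverManager.getConnection(
                                 "jdbc:sqlserver://" + server, "user", "pass");
                             CallableStatement cs = con.prepareCall("{call sp_readerrorlog(?)}")) {
                            cs.setInt(1, 0); // 0 = the current error log
                            try (ResultSet rs = cs.executeQuery()) {
                                while (rs.next()) { // columns: LogDate, ProcessInfo, Text
                                    ins.setString(1, server);
                                    ins.setTimestamp(2, rs.getTimestamp("LogDate"));
                                    ins.setString(3, rs.getString("ProcessInfo"));
                                    ins.setString(4, rs.getString("Text"));
                                    ins.executeUpdate();
                                }
                            }
                        }
                    }
                }
            }
        }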

    Read the article

  • What software do you use to help plan your team work, and why?

    - by Alex Feinman
    Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have you had with a particular tool?

    Read the article

  • Moving OVD from Test to Production

    - by mark.wilcox
    A customer asked support how to move a test OVD server to production. There are a couple of ways to do this. One way is to clone the environment: http://download.oracle.com/docs/cd/E15523_01/core.1111/e10105/testprod.htm#CH... Another way, which is particularly useful if you want to push configuration from a parent OVD server to children in a cluster: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10046/basic_server_set... Note that if you use the second option and you have any data in a Local Store Adapter, you may also need to use the oidcmprec tool to synchronize that data: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10029/replic_mng_mon.h...

    Read the article

  • What's an acceptable "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blistering fast one? We're just reviewing some Google Analytics data and getting a reported Avg. Page Load Time of 0.74 seconds. I guess that's OK. However, it would be good if some meatier comparison data were available - a blog post, or somewhere with some analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it, you just get a lot of results dealing with how to improve your speed. We're not at that stage yet.

    Read the article

  • If I intend to use Hadoop is there a difference in 12.04 LTS 64 Desktop and Server?

    - by Charles Daringer
    Sorry for such a newbie question, but I'm looking at installing the M3 edition of MapR; the requirements are at this link: http://www.mapr.com/doc/display/MapR/Requirements+for+Installation And my question is this: is the 64-bit Desktop kernel for 12.04 LTS adequate, i.e. the "same" as the Server version of the product? If I'm setting up a lab to attempt to install a home cluster environment, should I start with the Server edition, or dual boot that distribution? My assumption is that the two are the same, and that I can add any additional software to the 64-bit Desktop install as needed. Can anyone elaborate on this? Have I missed something obvious?

    Read the article

  • Do you use third party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following: basic code review, to make sure vendors follow the guidelines imposed; security code analysis, to make sure there are no loopholes; and load testing, to confirm there are no performance bottlenecks. We have a lot of code coming in from third parties and it is becoming laborious to manage code reviews, hence I'm looking to see if others employ such practices. I understand that it may be a concern for some and would raise the question "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

    Read the article

  • Apress Books - 3 - Pro ASP.NET 4 CMS (ISBN 978-1-4302-2712-0) - Final comments

    - by TATWORTH
    This book is more than just a book about an ASP.NET CMS system - it has much practical advice and many examples for the .NET web developer. I liked the use of jQuery to detect that JavaScript was not enabled. One chapter covers memcached - this one chapter alone could justify the price of the book if you run a server farm and need to improve performance. Some links to get you started are: Windows memcached at http://code.jellycan.com/memcached/ and the .NET access library at http://sourceforge.net/projects/memcacheddotnet/ The chapters on scripting, performance analysis and search engine optimisation all provide excellent examples. This certainly is a book that should be part of every .NET web development team's library. Congratulations to the author and to Apress for publishing this book!

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build.

    Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future.

    Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.

    Read the article

  • ETL Software Research Question

    - by WernerCD
    Where I work, we use an in-house ETL solution that's homegrown and has been around for 5-10 years. I'm still new to my data analysis job, but I was wondering about the ETL tools that are out there. This is a new area for me. My situation, and job, is basically:
    - dig through a set of databases (DB2, SQL 2005, Citrix, an ancient COBOL database with a SQL wrapper on top, MySQL, etc.);
    - gather the desired information;
    - combine the different datasets into one set;
    - output to a file of choice (CSV, tab-separated, pipe-separated, XLS, etc.);
    - FTP to the customer.
    I guess what my real question is: given my job, what are some good ETL suites that I can look at and compare to my in-house tools? This is more to research some other options. Ultimately, I'd either suggest a new solution or get options/ideas to improve our current app.
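
    For concreteness, here is a minimal sketch of the gather/combine/output steps in plain JDBC, the sort of pipeline an ETL suite would replace (the connection strings, query, and column names are hypothetical, and each source needs its own JDBC driver on the classpath):

        import java.io.FileWriter;
        import java.io.PrintWriter;
        import java.sql.*;

        public class MiniEtl {
            public static void main(String[] args) throws Exception {
                String[] sources = {
                    "jdbc:db2://db2host:50000/PROD",      // hypothetical
                    "jdbc:mysql://mysqlhost:3306/sales"   // hypothetical
                };
                try (PrintWriter out = new PrintWriter(new FileWriter("extract.csv"))) {
                    out.println("source,customer_id,amount"); // one combined header
                    for (String url : sources) {
                        try (Connection con = DriverManager.getConnection(url, "user", "pass");
                             Statement st = con.createStatement();
                             ResultSet rs = st.executeQuery(
                                 "SELECT customer_id, amount FROM billing_extract")) {
                            while (rs.next()) {
                                out.printf("%s,%s,%s%n",
                                    url, rs.getString(1), rs.getString(2));
                            }
                        }
                    }
                }
                // The FTP step would follow, e.g. with a library such as Apache Commons Net.
            }
        }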

    Read the article

  • Audio programming resources

    - by rashleighp
    For the last few months I've been very interested in getting into audio programming (I'm from a musical background). I've been a .NET developer for two years and have also done some Objective-C for an iPhone app recently. I realise I would probably need to work on my C++ chops, and I have been playing around with FMOD Ex and doing a lot of research into the industry. I was just wondering if anyone could suggest some good resources for audio programming (be they websites, podcasts, books, videos, online courses, etc.) - anything from Fourier analysis, low-level coding, and audio engine creation to audio APIs. I just want to learn as much as possible! Thanks in advance.

    Read the article

  • Team Foundation Server– Debug symbols(pdb files) generated in Release build? Fix it.

    - by Gopinath
    Yesterday I set up TFS for my .NET playground website to implement continuous integration and deployments. After a successful build I noticed that debug symbols (pdb files) were generated even though TFS is configured to build in Release mode. After a bit of analysis it turned out that TFS generates debug symbols (pdb files) until we pass the property DebugType=none. Here are the steps to pass the DebugType property to MSBuild in a TFS build definition:
    1. Go to Team Explorer.
    2. Select Build Definition >> Edit Build Definition.
    3. Switch to the Process tab.
    4. Navigate to the Advanced section and locate MSBuild Arguments.
    5. Add the following: /p:Configuration=Release /p:DebugType=none
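
    For a quick sanity check outside TFS, the same properties can be passed to a plain MSBuild invocation (the solution name here is hypothetical):

        msbuild MyPlayground.sln /p:Configuration=Release /p:DebugType=none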

    Read the article

  • How can modern static analysis make developers' lives easier? Discover Coverity's solution, used at CERN

    Webinar: how can the latest generation of static analysis make developers' lives easier? Discover the solution from Coverity, used at CERN. In a world where a minor bug can have devastating effects, older-generation static analysis tools often prove incapable of detecting the real, hard-to-identify code defects. But modern static code analysis tools exist that can detect these critical and potentially damaging defects early in the development cycle, thereby reducing the costs, delays and risks associated with software errors. Coverity Static Analysis is one of...

    Read the article

  • Enhancing Enterprise Planning and Forecasting Through Predictive Modeling

    Planning and forecasting performance in today's volatile economic environment can be challenging with traditional planning applications and manual modeling techniques. To address these challenges, leading edge companies are leveraging predictive modeling to bring statistical analysis and techniques such as Monte Carlo simulations into the mix. Sound too math-intense and complicated? Not anymore. These techniques can be applied by anyone - no prior stats experience required - whether to augment the forecasting performed by line managers or to validate those forecasts based on historical information, and to produce a broader range of scenarios to consider in decision-making.
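
    To give a flavor of the technique (a minimal sketch, not tied to any planning product; the baseline figure and growth distribution are assumptions), here is a Monte Carlo simulation that turns a single-point revenue forecast into a range of scenarios:

        import java.util.Arrays;
        import java.util.Random;

        public class ForecastMonteCarlo {
            public static void main(String[] args) {
                Random rnd = new Random(42);
                int trials = 100_000;
                double baseline = 10_000_000;   // hypothetical revenue forecast
                double[] outcomes = new double[trials];
                for (int i = 0; i < trials; i++) {
                    // Assume annual growth ~ Normal(mean 3%, stddev 5%).
                    double growth = 0.03 + 0.05 * rnd.nextGaussian();
                    outcomes[i] = baseline * (1 + growth);
                }
                Arrays.sort(outcomes);
                // Report a band of scenarios rather than a single number.
                System.out.printf("P10: %,.0f%n", outcomes[(int) (trials * 0.10)]);
                System.out.printf("P50: %,.0f%n", outcomes[(int) (trials * 0.50)]);
                System.out.printf("P90: %,.0f%n", outcomes[(int) (trials * 0.90)]);
            }
        }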

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer which we analyse, sending our comments back to the customer. Now the customer tells us that the statistics they computed between January and March are based on a flawed methodology, and sends us corrected series. We want to perform our analysis with both the wrong and the correct set of data, which are huge and differ only from January to March. Therefore, we need something like synthetic database records implementing the following logic:
    synthetic[1] = wrong_data
    synthetic[2] = correct_data between January and March, wrong_data otherwise
    With this, we can easily perform our analyses on synthetic records. Should such synthetic records be implemented in the application logic or on the side of the database? What are common pitfalls of such an implementation?
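
    By way of illustration, a minimal application-side sketch of the overlay logic above (assuming in-memory series keyed by date; the names and types are hypothetical):

        import java.time.LocalDate;
        import java.time.Month;
        import java.util.Map;

        public class SyntheticSeries {
            private final Map<LocalDate, Double> wrong;     // original series
            private final Map<LocalDate, Double> corrected; // Jan-Mar corrections

            public SyntheticSeries(Map<LocalDate, Double> wrong,
                                   Map<LocalDate, Double> corrected) {
                this.wrong = wrong;
                this.corrected = corrected;
            }

            // synthetic[2]: corrected data between January and March, wrong data otherwise.
            public Double get(LocalDate day) {
                boolean inWindow = day.getMonthValue() <= Month.MARCH.getValue();
                if (inWindow && corrected.containsKey(day)) {
                    return corrected.get(day);
                }
                return wrong.get(day);
            }
        }

    The database-side equivalent would typically be a view that unions the corrected range with the original table outside that range. The pitfalls are similar either way: ensuring every analysis path goes through the synthetic view, and keeping the date window defined in exactly one place.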

    Read the article

  • Technical Computing

    Today, Microsoft announced our Technical Computing initiative. Through the Technical Computing initiative, we will enable scientists, engineers and analysts to more easily model the world at much greater fidelity. The Technical Computing initiative will address a wide range of users. One of the most critical elements is to help developers create applications that can take advantage of parallelism on their desktop, in a cluster, and in public and private clouds.

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >