Search Results

Search found 19061 results on 763 pages for 'load factor'.


  • No GUI boot; startx error, I suspect no filesystem corruption.

    - by Dharmaj Soni
    Until yesterday, my Ubuntu 9.10 was working fine. I had watched a movie using VLC and had charged my iPod from the laptop. Today, when I started it, the machine booted straight into the command line. There seems to be no filesystem corruption, as I can view and open text files. Before the CLI appeared, the screen blinked with a cursor, the white Ubuntu logo flashed, and then I got the CLI login prompt. After logging in, if I try startx to start GNOME, I get the following error after a few seconds:

        giving up
        xinit: No such file or directory (errno 2): unable to connect to X server
        xinit: No such process (errno 3): Server error

    The same error comes up even if I use sudo, if I change my directory to '/' before using startx, or if I boot into the CLI via the recovery mode option in GRUB and then try startx. On trying the command 'xinit', I get "Server error". Also, on trying GDM, I get 2 errors. I cannot connect to the internet in this state. Thanks for any help. I am using a Dell Inspiron 1440 with no special graphics card.

    Read the article

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    OK, our new build is having 100% CPU spikes on each server at random intervals. For long durations this makes the site totally unresponsive, and it happens at peak times as people in different countries log on to the site. We've looked at perfmon, memory profilers, the CLR profiler, SQL profilers, and Red Gate ANTS profiler, and tried load testing in UAT, but we cannot even reproduce the problem. This could mean that only thousands of users hitting the live site causes it to happen. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact, so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?

    Read the article

  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
    If we have to get data from a SQL database, the standard way is to use a receive port with the SQL adapter. The SQL receive adapter is a solicit-response adapter: it periodically polls the SQL database with queries. That is the only way it can work, and sometimes it is undesirable. With the new WCF-SQL adapter we can use a more lightweight approach, but on the same principle: the WCF-SQL adapter periodically solicits the database with queries to check for new records.

    Imagine a situation where new records can appear within very broad time limits: some a second apart, others several minutes apart. Our requirement is to process the new records ASAP. That means the polling interval should be near the shortest interval between new records - a second. As a result, most of the poll queries would return nothing and would load the database without good reason. If the database is working under heavy load, that is very undesirable.

    Do we have other choices? Sure. We can change the polling to "eventing". The good news is that SQL Server can raise an event for new records with triggers: got a new record - the trigger fires; no new records - no trigger events - no excessive load on the database. The bad news is that SQL Server doesn't have intrinsic methods to send the event data outside. We would rather use adapters that listen for data and do not solicit. There are several such listener adapters: File, FTP, SOAP, WCF, and MSMQ. But SQL Server doesn't have methods to create and save files, to consume web services, or to create and send messages to a queue, does it?

    Can we use the File, FTP, MSMQ, or WCF adapters to get data from SQL code? Yes, we can. SQL Server 2005 and 2008 can run .NET code inside SQL code - see SQL CLR Integration. Here is how it works for MSMQ, for example:

    - A new record is created; the trigger fires.
    - The trigger calls the CLR stored procedure and passes the message parameters to it.
    - The CLR stored procedure creates a message and sends it to the outgoing queue on the SQL Server computer.
    - The MSMQ service transfers the message to the queue on the BizTalk Server computer.
    - The WCF-NetMsmq adapter receives the message from this queue.

    For the File adapter the idea is the same: the CLR stored procedure creates and stores a file with the message, and then the File adapter picks up this file.

    Using the WCF-NetMsmq adapter to get data from SQL

    Below is the full set of development and deployment steps for the case with the WCF-NetMsmq adapter.

    Development:
    1. Create the .NET code: a project, class, and method to create and send the message to the MSMQ queue.
    2. Create the SQL code in triggers to call the .NET code.

    Installation and deployment:
    1. SQL Server:
       a. Register the CLR assembly with the .NET (CLR) code.
       b. Install the MSMQ services.
    2. BizTalk Server:
       a. Install the MSMQ services.
       b. Create the MSMQ queue.
       c. Create the WCF-NetMsmq receive port.

    The detailed description is below.

    Code

    .NET code:

        using System.Xml;
        using System.Xml.Linq;
        using System.Xml.Serialization;

        // namespace MyCompany.MySolution.MyProject - doesn't work; the assembly name is MyCompany.MySolution.MyProject.
        // I gave up on the compound namespace. It seems CLR Integration cannot work with it. Maybe I'm wrong.
        public class Event
        {
            static public XElement CreateMsg(int par1, int par2, int par3)
            {
                XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc";
                XElement xdoc =
                    new XElement(ns + "TypedPolling",
                        new XElement(ns + "TypedPollingResultSet0",
                            new XElement(ns + "TypedPollingResultSet0",
                                new XElement(ns + "par1", par1),
                                new XElement(ns + "par2", par2),
                                new XElement(ns + "par3", par3)
                            )
                        )
                    );
                return xdoc;
            }
        }

        // ------------------------------------------------------------------

        using System.Xml;
        using System.Xml.Linq;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Transactions;
        using System.Data;
        using System.Data.Sql;
        using System.Data.SqlTypes;

        public class MsmqHelper
        {
            [Microsoft.SqlServer.Server.SqlProcedure]
            // msmqAddress as "net.msmq://localhost/private/myapp.myqueue"
            public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3)
            {
                using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
                    binding.ExactlyOnce = true;
                    EndpointAddress address = new EndpointAddress(msmqAddress);

                    using (ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, address))
                    {
                        IOutputChannel channel = factory.CreateChannel();
                        try
                        {
                            XElement xe = Event.CreateMsg(par1, par2, par3);
                            XmlReader xr = xe.CreateReader();
                            Message msg = Message.CreateMessage(MessageVersion.Default, action, xr);
                            channel.Send(msg);
                            // SqlContext.Pipe.Send(…); // to test
                        }
                        catch (Exception ex)
                        {
                            …
                        }
                    }
                    scope.Complete();
                }
            }
        }

    SQL code in triggers:

        -- sp_SendMsg was registered as a name for MsmqHelper.SendMsg()
        EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3

    Installation and deployment

    On the SQL Server

    Registering the CLR assembly

    1. Prerequisites: the .NET 3.5 SP1 Framework. This could be an issue for a production SQL Server!
    2. For more information, please see http://nielsb.wordpress.com/sqlclrwcf/
    3. Copy files:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"

       If your machine is 64-bit, run two commands:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"
        >copy "\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"
    4. Execute the SQL code to register the .NET assemblies:

        -- For x64 OS:
        CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe

        -- For x32 OS:
        --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe

    5. Register the assembly with the external stored procedure:

        CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM '<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe

       where <FilePath> is the path of the file on this machine!

    6. Create the external stored procedure:

        CREATE PROCEDURE sp_SendMsg
        (
            @msmqAddress nvarchar(100),
            @Action NVARCHAR(50),
            @par1 int,
            @par2 int,
            @par3 int
        )
        AS EXTERNAL NAME HelperClass.MsmqHelper.SendMsg

    Installing the MSMQ services

    1. Check whether the MSMQ service is NOT already installed. To check: Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications" and look for "Message Queuing". If you cannot see it, follow the next steps.
    2. Start / Control Panel / Programs and Features.
    3. Click "Turn Windows Features on or off".
    4. Click Features, then click "Add Features".
    5. Scroll down the feature list; open "Message Queuing" / "Message Queuing Services"; check the "Message Queuing Server" option.
    6. Click Next; click Install; wait for the installation to finish successfully.

    Creating the MSMQ queue

    We don't need to create the queue on the "sender" side.

    On the BizTalk Server

    Installing the MSMQ services

    The same as for the SQL Server.

    Creating the MSMQ queue

    1. Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications", open "Message Queuing", and open "Private Queues".
    2. Right-click "Private Queues"; choose New; choose "Private Queue".
    3. Type the queue name as 'myapp.myqueue' and check the "Transactional" option.

    Creating the WCF-NetMsmq receive port

    I will not go through this step in detail; it is straightforward. The URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'.

    Notes

    - The biggest problem is usually the "Registering the CLR assembly" step.
      It is hard to predict where the assemblies from the assembly list are located and which version should be used, x86 or x64. It is a pity the integration of SQL with .NET is this crude.
    - In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port with netMsmqBinding; that worked fine for me.
    - To test how messages go through the queue, you can turn on the Journal / Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it shows only a small part of the message body, and in a weird format. QueueExplorer does a better job: it shows the whole body, and XML messages get nice color formatting.
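
    For completeness, here is a minimal sketch of what the calling trigger from the Development step might look like. Only the sp_SendMsg call comes from the article; the table dbo.MyTable, its columns, and the trigger name are hypothetical placeholders:

        -- Hypothetical trigger sketch: table and column names are placeholders.
        CREATE TRIGGER trg_MyTable_SendMsg ON dbo.MyTable
        AFTER INSERT
        AS
        BEGIN
            DECLARE @par1 int, @par2 int, @par3 int;
            -- Read the parameters from the inserted pseudo-table
            -- (single-row inserts assumed, for simplicity).
            SELECT @par1 = col1, @par2 = col2, @par3 = col3 FROM inserted;
            EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3;
        END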

    Read the article

  • How to manage security of these self-hosted Web APIs, to ensure that requests for data are authenticated?

    - by Husrat Mehmood
    Let's pretend I am going to work on an enterprise application. Say I have 11 modules in the application, and I have to develop dashboards for every role in the organization for which we are building the application. We decided to use ASP.NET Web API and return JSON data from our APIs. We are going to include 11 self-hosted Web API projects in our application - one self-hosted Web API per module. All 11 modules are connected to one SQL Server 2012 database. Once the APIs are ready, we will create business dashboards (based upon the roles in the organization). Now for the part all this explanation leads up to: my Web API client is an ASP.NET MVC application, and that MVC application will consume those Web APIs. How should I manage security for all 11 self-hosted Web APIs? How should I ensure that only authenticated requests come in? If I authenticate a user by login and password, then redirect the user to the dashboard designed for their role and load data by consuming the Web APIs - how should I ensure that the requests coming in for data are authenticated?

    Read the article

  • can't run sqldeveloper on Ubuntu

    - by nazar_art
    I tried to install SQL Developer the following way:

    1. Download SQL Developer from the Oracle website (I chose the Other Platforms download).
    2. Extract the file to /opt:

        sudo unzip sqldeveloper-*-no-jre.zip -d /opt/
        sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh

    3. Link an in-path launcher for Oracle SQL Developer:

        sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper

    4. Edit /usr/local/bin/sqldeveloper.sh, replacing its content with:

        #!/bin/bash
        cd /opt/sqldeveloper/sqldeveloper/bin
        ./sqldeveloper "$@"

    5. Run SQL Developer:

        sqldeveloper

    But it shows the following output:

        nazar@lelyak-desktop:/opt/sqldeveloper$ ./sqldeveloper.sh

        Oracle SQL Developer
        Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.

        LOAD TIME : 401
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007f3b2dcacbe0, pid=20351, tid=139892273444608
        #
        # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  0x00007f3b2dcacbe0
        #
        # Core dump written. Default location: /opt/sqldeveloper/sqldeveloper/bin/core or core.20351
        #
        # An error report file with more information is saved as:
        # /tmp/hs_err_pid20351.log
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        #
        /opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 20351 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

        nazar@lelyak-desktop:/opt/sqldeveloper$ java -version
        java version "1.7.0_65"
        Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
        Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

    Here is the content of /tmp/hs_err_pid20351.log. How can I solve this?

    Read the article

  • Stunnel too many clients

    - by davidsmalley
    I'm trying to hook up stunnel and HAProxy to forward HTTPS connections through to some backend servers. I've got HAProxy set up right, and I seem to have stunnel set up right. The trouble is that when I hit the setup with a load test, after a while I start to see these log entries:

        2010.05.05 11:24:43 LOG7[3498:3086792368]: https accepted FD=512 from 10.195.158.225:52579
        2010.05.05 11:24:43 LOG4[3498:3086792368]: Connection rejected: too many clients (=500)

    I guess I've hit a limit somewhere, but I wasn't sure how to fix it; there doesn't seem to be a config file option in stunnel to change this. Does anyone know how to configure stunnel for a potentially large number of connections?

    Read the article

  • CodeIgniter's Scaffolding not working

    - by 01010011
    Hi, I keep getting a 404 Page Not Found whenever I try to access CodeIgniter's scaffolding page in my browser, like so: localhost/codeigniter/index.php/blog/scaffolding/mysecretword - although I can access localhost/codeigniter/index.php/blog just fine. I followed CodeIgniter's instructions in their "Create a blog in 20 minutes" tutorial: I stored my database settings in the database.php file, set the database to connect automatically by adding "database" to the core array in autoload.php, and added both parent::Controller(); and $this->load->scaffolding('myTableName'); to the blog controller's constructor. It still gives me this 404. Any suggestions?

    Read the article

  • Simple web-frontend for remote svn administration?

    - by Stefan Lasiewski
    We run an SVN repository. Some of our more advanced users need to be able to perform some SVN administration without relying on the system administrator. They need to be able to do things like create SVN repositories, delete SVN repositories, and run commands like 'svnadmin dump' and 'svnadmin load'. We'd like to avoid SSH access on these FreeBSD machines, and would rather provide a service interface through a web UI. I'm looking for a simple script (or a small number of scripts) using Perl or PHP. I found svnadmin.pl, but was hoping to find something with a larger user community or which has been recommended by others. It looks like Trac allows SVN administration, but it comes with many more features than we need.

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface - although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data, and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project
          has_many Nodes
          has_many GlobalConditions
        Node
          has_one Parent
          has_many Nodes
          has_many WeightingFactors through NodeFactors
          has_many Tags through NodeTags
        GlobalCondition
          has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor
          has_many Nodes through NodeFactors
        Tag
          has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later (a sketch of this idea follows); but given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
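
    To make the project-scoping idea concrete, here is a minimal sketch of what it could look like at the SQL level, assuming each table is denormalized with a project_id column - an assumption, since the model listing above does not say it has one:

        -- Hypothetical: every table carries a project_id column.
        -- One round trip per table, each filtered by the same project.
        SELECT * FROM nodes             WHERE project_id = 42;
        SELECT * FROM global_conditions WHERE project_id = 42;
        SELECT * FROM node_factors      WHERE project_id = 42;
        SELECT * FROM node_tags         WHERE project_id = 42;
        -- The application then stitches parent/child and join-table
        -- relationships together in memory, instead of issuing per-node queries.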

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic, and we're staying close to the surface. I aim this class at newbie SQL users with less than 2 years of experience. It's all the things I wish someone had told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features; the difference between clustered, non-clustered, and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and the implications of using either one in your code. That pretty much takes up a full hour. Second presentation: Loading Data in Real Time. It's really a presentation about partitioning, but with a twist we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, MERGE, and sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another (see the sketch below). Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog, please introduce yourself!
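
    As a taste of the second talk, here is a minimal, hypothetical sketch of the pattern it describes - a staging table on the same partition scheme as the production table, loaded offline and then switched in. Table, column, and partition names are made up for illustration:

        -- Hypothetical example of a partition SWITCH for near-zero-downtime loads.
        CREATE PARTITION FUNCTION pf_LoadDate (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-04-01', '2010-05-01');
        CREATE PARTITION SCHEME ps_LoadDate
            AS PARTITION pf_LoadDate ALL TO ([PRIMARY]);

        -- Staging and production tables must have identical structure.
        CREATE TABLE dbo.Sales_Staging (LoadDate datetime NOT NULL, Amount money)
            ON ps_LoadDate (LoadDate);
        CREATE TABLE dbo.Sales (LoadDate datetime NOT NULL, Amount money)
            ON ps_LoadDate (LoadDate);

        -- Bulk load into dbo.Sales_Staging here, then swap the populated
        -- partition into the production table as a metadata-only operation.
        ALTER TABLE dbo.Sales_Staging
            SWITCH PARTITION $PARTITION.pf_LoadDate('2010-04-15')
            TO dbo.Sales PARTITION $PARTITION.pf_LoadDate('2010-04-15');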

    Read the article

  • Cisco 2900 Series Routers worth the price?

    - by NickToyota
    Just wondering if anyone has experience with the Cisco 2911 or the 2900 series routers? I understand the series is newer than and similar to the 2811, but more robust, and the price difference is not that much more - I am curious as to why. I am trying to determine if I should go with the 29xx or 28xx series for a medium-sized company. ISP load balancing and failover are required; T1 and ADSL lines are already in place.

    Read the article

  • installing and running google-chrome on an old Ubuntu 7.10 legacy system

    - by 12632
    I am trying to get google-chrome to work on Ubuntu 7.10. I installed it with --force-depends and got it to install, but now when I try to run it, I get this error:

        /usr/bin/google-chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory

    Is there a way to still get google-chrome to load even without this dependency satisfied? This is an old system that needs to keep the old Ubuntu 7.10 version, and I would like to have google-chrome installed if possible, even if it means no sound or other features that are not compatible.

    Read the article

  • Godaddy vs. Route53 for DNS

    - by tim peterson
    I have my website set up as an EC2 instance, and my DNS is currently with GoDaddy. I'm considering switching to Amazon Route 53 for DNS. The one thing I noticed, however, is that Route 53 charges monthly fees, while I never get any bills from GoDaddy. Obviously, nobody likes getting charged for something they can get for free. If GoDaddy is cheaper, can anyone confirm that the page load speed of an EC2 instance is actually better via Route 53 vs. GoDaddy? If it is not faster or cheaper, can someone point out other reasons it might make sense to make this switch? Thanks, Tim

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross-Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace feature?

    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires copying the datafiles from source to destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data. (A generic sketch of the flow is below.)
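
    For readers new to TTS, here is a minimal sketch of the generic database-level flow. The tablespace, directory, and path names are hypothetical, and the EBS-specific steps from the certification note are omitted:

        -- On the source database: make the tablespace read-only,
        -- then export its metadata with Data Pump.
        ALTER TABLESPACE sales_ts READ ONLY;
        -- $ expdp system DIRECTORY=dp_dir DUMPFILE=sales_ts.dmp TRANSPORT_TABLESPACES=sales_ts

        -- If the endianness differs, convert the datafiles with RMAN
        -- (on either the source or the target platform):
        -- RMAN> CONVERT TABLESPACE sales_ts
        --         TO PLATFORM 'Linux IA (32-bit)' FORMAT '/stage/%U';

        -- On the target: copy the (converted) datafiles and the dump file over,
        -- plug the tablespace in, and reopen it read-write.
        -- $ impdp system DIRECTORY=dp_dir DUMPFILE=sales_ts.dmp
        --     TRANSPORT_DATAFILES='/u01/oradata/TGT/sales_ts01.dbf'
        ALTER TABLESPACE sales_ts READ WRITE;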

    Read the article

  • Gmail firing background tasks in Chrome and Firefox

    - by Keith Bloom
    I am experiencing a strange problem with Gmail running on Vista, and it started when Windows Update last patched my computer. Since then, when Gmail is open in either Chrome or Safari, the pointer changes to the "Working in Background" state every couple of seconds. You can imagine how distracting this is. I had never used Gmail in IE before, so I loaded it up; Gmail wanted to load the Google Talk ActiveX control, which I declined. Gmail in IE (I'm using version 8) didn't fire these background processes, so I'm thinking it has something to do with Google Talk. Has anyone experienced this problem? Also, does anyone know how to disable Google Talk in Firefox or Chrome?

    Read the article

  • Why is my system waiting for google-analytics?

    - by mikeroosa
    When I go to certain websites that have Google Analytics, they never load. One example is www.jquery.com. I tried in Firefox, Chrome, Safari, and Explorer. Chrome and Safari flash the home page, then it disappears and the screen goes white; Firefox never loads anything and the spinner just keeps spinning; it loads fine in Explorer. Any ideas of what may be going on? If I use my laptop on the same home internet connection, it works fine. This is not happening on all sites with Analytics, just certain ones.

    Read the article

  • Is there a usable HTML web gallery template for Lightroom 2?

    - by Johannes Rössel
    The built-in HTML gallery unfortunately generates invalid HTML markup, so its looks are likely more coincidence than proper design. That aside, its JavaScript usage prevents using the middle mouse button to open an image in a new tab. A look at the few free HTML gallery templates out there shows me that there seems to be little interest in building an actual HTML gallery that doesn't use JavaScript. Most of them use JS to build fancy animations or in-page popup boxes containing the image. This may be nice for some people, but at least one person who looks at my images is on dial-up, and for them it is very handy to quickly open all images in individual tabs, let them load, and then close the connection. Has anyone seen such a gallery template, i.e. simple, plain HTML without JS? My HTML/CSS skills aren't good enough to develop such a thing on my own, unfortunately.

    Read the article

  • Does disabling the error log increase MySQL's performance? How do I disable it?

    - by adnan
    Does disabling the error log increase MySQL's performance? And how do I disable it? This is my service status:

        Server load: 0.63 (8 CPUs)
        Memory used: 23.38% (957,600 of 4,096,000)
        Swap used: 0% (0 of 1)

    And this is a screenshot of the process manager: http://elnhrda.com/promgr.jpg

    This is my my.cnf:

        [mysqld]
        query_cache_size=64M
        skip-name-resolve
        #innodb_file_per_table=1
        query_cache_limit=2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 16M
        sort_buffer_size = 8M
        join_buffer_size = 8M
        thread_cache_size = 8
        thread_concurrency = 8
        innodb_buffer_pool_size = 2G

    I am looking to do anything to increase my website speed. I have a VPS with 4 GB RAM running CentOS 6 x86_64. Please note: these statistics were taken just now, while no queries were being executed and the site had no visitors.
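
    For anyone investigating the same question, a quick way to see where the error log and the other logging switches currently point is to ask the server itself. These are standard MySQL statements, though the exact variables available depend on the server version:

        -- Show where the error log is written (empty typically means stderr).
        SHOW VARIABLES LIKE 'log_error';
        -- Other logging switches that can add write load:
        SHOW VARIABLES LIKE 'general_log%';
        SHOW VARIABLES LIKE 'slow_query_log%';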

    Read the article

  • How to solve an error after installing IIS 7 on 32-bit Windows

    - by raman
    I have installed IIS 7 on my desktop, but when I run the IIS shortcut and click to connect to localhost or a server, I get the following error:

        Failed to connect
        There was an error when trying to connect. Do you want to retype your credentials and try again?
        Details: Could not load file or assembly 'Microsoft.Web.Administration, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

    What do I do to solve this?

    Read the article

  • Why use FQDN as DNS-server option in DHCP?

    - by Filip Haglund
    I've seen multiple default configurations of DHCP servers with an FQDN set as the DNS-server option. Doesn't this imply a catch-22, or the need for that DNS server to be in the hosts file of every single client? An example from dhcp3-server in Debian 6:

        option domain-name-servers ns1.internal.example.org;

    I can see how using a DNS name is convenient, because it's only an A record to change and the servers can be load-balanced if wanted, but I don't see how the client is going to resolve the name. Why are people using FQDNs as DNS-server addresses in DHCP?

    Read the article

  • Partition table corrupted on Windows 7 machine; partitions converted to simple dynamic partitions

    - by raki
    I have Windows 7 installed on my system. When I originally partitioned it, I made 3 partitions and left 16 GB as unallocated space. Later I tried to create a partition from this free space using the Disk Management tool. It showed the free space as unusable, and the only option available was to make it a simple partition. Unfortunately I made it a simple partition, and all my partitions were converted to simple dynamic partitions. Now, after a reboot, the OS does not load. I tried to reinstall the OS by formatting the C drive, but it doesn't work; I still can't load the OS properly. How can I install Windows 7 on my system without losing the data on the other two drives?

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise, by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional "public" data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your socially enabled Big Data analytics and CX management.

    Let's assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening to and learning from your customers and key constituents in social media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) management systems, as well as insights for analysis in your business intelligence application. What is the next step?

    Augmenting social data with other public data for more advanced analytics

    When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data against Key Performance Indicators (KPIs), to achieve and optimize business value - and in some cases, to predict future performance, so you can make appropriate course corrections and change the outcome to your advantage while you can. The data needed to acquire, process and analyze this is very nuanced:

    - It can vary across structured, semi-structured, and unstructured data
    - It can span content, profile, and communities-of-profiles data
    - It is increasingly public, curated and user-generated

    The key is not just getting the data, but making it value-added data and using it to help discover the insights that connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to be able to ingest and use "quality" curated, social, and transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system, and providing the contextual integration of the underlying data, enriched with insights, for export into the enterprise's business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS. Some quick points on these requirements:

    Public data, which in this context is about common business entities, such as:

    - Customers, suppliers, partners, competitors (all are organizations)
    - Contacts, consumers, employees (all are people)
    - Products, brands

    This data can be broadly categorized, incrementally, as:

    - Base utility data (address, industry classification)
    - Public master reference data (trade style, hierarchy)
    - Social/web data (news, feeds, graph)
    - Transactional data generated by enterprise processes, workflows, etc.

    This data has traits of high volume, variety, velocity, etc., and the technology needed to efficiently integrate it for your needs includes:

    - Change management of public reference data across all categories
    - Applied Big Data to extract statistics as well as real-time insights
    - Knowledge diagnostics and data mining

    As you consider how to deploy this solution, many of our customers will be using an online "cloud" service that provides quality data and insights uniformly to all their necessary applications.
    In addition, they are requesting a service that is:

    - Agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply
    - Cost-effective: pre-integrated into applications, so customers don't have to do the integration themselves
    - High data quality: a single point of access to reference data, for data quality and linkages to transactional, curated and social data
    - Supportive of data governance: more manageable and cost-effective, since control of data privacy and compliance can be enforced in a centralized place

    Data-as-a-Service (DaaS)

    Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT - from infrastructure and platform to software (IaaS, PaaS, and SaaS) - the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer cloud-based data services and gain initial traction. On one side of the DaaS continuum, we see an "appliance" type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market "exchange" approach, where ISVs and data publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to ease data integration. Why the difference? It depends on the provider's philosophy on how fast certain data types will be commoditized.

    How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harnessing the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data endpoints, mobility considerations, and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do address verification treatments for accounts, contacts, etc. If you design and revise this service in such a way that it is also easily available to social analytics, you could extend it to launch geo-location-based social use cases (marketing, sales, etc.).

    Our fundamental belief is that value-added data - achieved through enrichment with specialized algorithms, as well as applying business know-how to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data - will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combination capabilities to extract and integrate the right data from the right sources with the right factoring at the right time, for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

    Let's look at a quick example of creating a master reference relationship that could be used as an input for a variety of your existing business applications. In phase 1, a simple master relationship is established between a company (e.g. General Motors) and social insights for a variety of its car brands.
    The reference data allows for easy sorting, export, and integration into a set of CRM use cases for analytics, sales, and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social metadata) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with.

    DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.

    Photo: Erik Araujo, stock.xchng

    Read the article

  • Fault tolerance with a pair of tightly coupled services

    - by cogitor
    I have two tightly coupled services that can run on completely different nodes (e.g. ServiceA and ServiceB). If I start up another replicated copy of both these services for backup purposes (ServiceA-2 and ServiceB-2), what would be the best way of setting up a fault-tolerant distributed system such that on a fault in either of the tightly coupled services ServiceA or ServiceB, the whole communication goes through the backups ServiceA-2 and ServiceB-2? Overall, all the communication should go either through both primary services or through both of their backup replicas:

        |---- Service A ---- Service B
        |
        |    (backup branch - used only on a fault in Service A or B)
        |---- Service A-2 ---- Service B-2

    Note that in case Service A goes down, data from Service B would be incorrect (and vice versa). Load balancing between the primary and backup branches is also not feasible.

    Read the article

  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog/logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads:

    1. Log collection: potentially massive streams of data need to be written quickly to disk and indexed.
    2. Log querying: logs will be queried both by fixed fields, such as date and source, and by text search.

    What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to offload some processing? Some disks in a stripe and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.

    Read the article
