Search Results

Search found 14824 results on 593 pages for 'replication services'.

Page 404/593 | < Previous Page | 400 401 402 403 404 405 406 407 408 409 410 411  | Next Page >

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: using memcache on all major queries and leaving it alone to do what it does best, or using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, despite the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Or should we instead focus on utilizing memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
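
    A minimal sketch of the combined approach, assuming the classic pecl/memcache extension and a local ./cache directory (key names, TTL and paths are all assumptions): memcache is checked first, and the disk file only serves as a fallback that re-primes memory on a hit.

        <?php
        // Two-tier cache sketch: RAM (memcache) first, disk as fallback.
        function cache_get($memcache, $key) {
            $value = $memcache->get($key);
            if ($value !== false) {
                return $value;                              // hot path: served from RAM
            }
            $file = './cache/' . md5($key) . '.cache';
            if (is_readable($file)) {
                $value = unserialize(file_get_contents($file));
                $memcache->set($key, $value, 0, 300);       // re-prime memcache on a disk hit
                return $value;
            }
            return false;                                   // miss in both tiers
        }

        function cache_set($memcache, $key, $value) {
            $memcache->set($key, $value, 0, 300);           // 300 s TTL in RAM
            file_put_contents('./cache/' . md5($key) . '.cache', serialize($value));
        }
        ?>

    One design note: with this layout, memory pressure only costs the latency of a disk read rather than a full database query, which is the usual argument for accepting the small performance compromise of the two-tier setup.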

    Read the article

  • Hadoop reduce task gets hung

    - by user806098
    I set up a Hadoop cluster with 4 nodes. When running a map-reduce job, the map tasks finish quickly, while the reduce task hangs at 27%. I checked the log, and it shows that the reduce task fails to fetch map output from the map nodes. The job tracker log of the master shows messages like this: 2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476' And the name node log of the master shows messages like this: 2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.39:50010 However, neither "web30.bbn.com.cn" nor 202.106.199.39/202.106.199.3 is a slave node. I think such IPs/domains appear because Hadoop fails to resolve a node (first against the intranet DNS server), then goes to a higher-level DNS server, and so on up to the top; it still fails, and then the "junk" IPs/domains are returned. But I checked my config, and it goes like this: /etc/hosts: 127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.168.225.16 master 192.168.225.66 slave1 192.168.225.20 slave5 192.168.225.17 slave17 conf/core-site.xml: hadoop.tmp.dir /root/hadoop_tmp/hadoop_${user.name} fs.default.name hdfs://master:54310 io.sort.mb 1024 hdfs-site.xml: dfs.replication 3 masters: master slaves: master slave1 slave5 slave17 Also, all firewalls (iptables) are turned off, and SSH between every two nodes is OK. So I don't know where exactly the error comes from. Please help. Thanks a lot.

    Read the article

  • Commercial Website architecture question

    - by Maxime ARNSTAMM
    Hello everyone, I have to write an architecture case study, but there are some things that I don't know, so I'd like some pointers on the following: The website must handle 5k simultaneous users. The backend is composed of a commercial software package, some web services, some message queues, and a database. I want to recommend using Spring for the backend, to deal with the different elements and to expose some REST services. I also want to recommend Wicket for the front end (not the point here). What I don't know is: must I install the front end and the back end on the same Tomcat server, or on two different ones? I am tempted to put two servers for the front end, with a load balancer (no need for session replication in this case). But if I have two front servers, must I have two back servers? I don't want to create some kind of bottleneck. Based on what I read on this blog, a really huge load is handled by only one Tomcat for the first website mentioned. But I cannot find any other info on this, so I can't tell whether it seems plausible. If you can enlighten me so I can go on with my case study, that would be really helpful. Thanks :)

    Read the article

  • How can I automatically generate SQL update scripts when some data is updated?

    - by Brann
    I'd like to automatically generate an update script each time a value is modified in my database. In other words, if a stored procedure, or a query, or whatever updates column a with value b in table c (which has a primary key (i, j, ..., k)), I want to generate this: update c set a=b where i=... and j=... and k=... and store it somewhere (for example as a raw string in a table). To complicate things, I want the script to be generated only if the update has been made by a specific user. The good news is that I've got a primary key defined for all my tables. I can see how to do this using a trigger, but I would need to generate specific triggers for each table, and to update them each and every time my schema changes. I guess there are some built-in ways to do this, as SQL Server sometimes needs to store this kind of thing (while using transactional replication, for example), but I couldn't find anything so far... any ideas? I'm also interested in ways to automatically generate triggers (probably using triggers - meta triggers, huh? - since I will need to update the triggers automatically when the schema changes).
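
    A hedged sketch of the per-table trigger for one hypothetical table c with primary key (i, j) and an audited character column a - the table, column, user and audit-table names are all assumptions, not a built-in SQL Server facility:

        -- Log an UPDATE statement for each row actually changed by a given user.
        CREATE TRIGGER trg_c_audit ON c
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            IF SUSER_SNAME() <> 'audited_user' RETURN;       -- only for the specific user

            INSERT INTO update_scripts (script)
            SELECT 'update c set a=' + QUOTENAME(ins.a, '''')  -- assumes a is character-typed
                 + ' where i=' + CAST(ins.i AS varchar(20))
                 + ' and j=' + CAST(ins.j AS varchar(20))
            FROM inserted ins
            JOIN deleted del ON del.i = ins.i AND del.j = ins.j
            WHERE ins.a <> del.a;                            -- skip dummy updates
        END

    The triggers themselves could then be generated from metadata: a script looping over INFORMATION_SCHEMA.KEY_COLUMN_USAGE can emit one such trigger per table, which at least automates the schema-change maintenance the question worries about.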

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function: var redis = require("redis"), client = redis.createClient(); client.on("connect", function () { client.info(function (err, reply) { console.log(reply); }) }) but the response is unformatted: '#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it into an object? Like: { redis_version:2.6.16, redis_git_sha1:00000000, redis_git_dirty:0, ...... } so that I can read each property's value and get the information I need.
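
    A minimal parsing sketch, assuming reply is the raw INFO string from the callback above: split on CRLF, skip blank lines and the # section headers, and collect the key:value pairs into an object (values stay strings).

        // Turn a Redis INFO reply into a plain object.
        function parseInfo(reply) {
            var info = {};
            reply.toString().split('\r\n').forEach(function (line) {
                if (!line || line.charAt(0) === '#') return;  // skip blanks and "# Section" headers
                var idx = line.indexOf(':');
                if (idx > 0) {
                    info[line.slice(0, idx)] = line.slice(idx + 1);
                }
            });
            return info;
        }

        // Usage inside the callback:
        //   var info = parseInfo(reply);
        //   console.log(info.redis_version);   // "2.6.16"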

    Read the article

  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment: BEGIN UPDATE DSMS SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID WHERE DSM = :DSM; IF (SQL%ROWCOUNT = 0) THEN INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID) VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID); END IF; END; This runs fine, except that the update statement performs a dummy update if the data is the same as the parameter values provided. I wouldn't mind the dummy update in a normal situation, but there's a replication/synchronization system built over this table using triggers to capture updated records, and executing this statement frequently for many records simply means that I'd cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't update the record if it's not necessary, without using the following IF-EXISTS check, which I find not sleek enough and maybe also not the most efficient for this task? DECLARE CNT NUMBER; BEGIN SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM; IF CNT > 0 THEN UPDATE DSMS SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID WHERE DSM = :DSM AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID); ELSE INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID) VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID); END IF; END;
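
    A hedged alternative (Oracle 10g and later): a single MERGE whose WHEN MATCHED branch carries a WHERE guard, so unchanged rows are never written and the replication triggers never fire. The inequality tests assume the columns are NOT NULL; nullable columns would need NVL wrapping.

        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
               FROM dual) s
        ON (d.DSM = s.DSM)
        WHEN MATCHED THEN UPDATE
            SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
            WHERE d.SURNAME <> s.SURNAME       -- skip the write entirely
               OR d.FIRSTNAME <> s.FIRSTNAME   -- when nothing changed
               OR d.VALID <> s.VALID
        WHEN NOT MATCHED THEN
            INSERT (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);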

    Read the article

  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build the architecture basically described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (not original) cluster host. I had a hope that a node might become persistent at eviction, so I've got a cache listener attached to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I had a thought that I could try to modify the CacheLoader to set "passivation" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access the loader. I wonder if replicated data persistence is possible at all through configuration tuning? If not, will it work for me to create some manual persistence in a CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project that was generated with Seam. I guess it's not important though.

    Read the article

  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files for each site being customised. The customised files for each site are: /library/mysql_connect.php /public_html/css/* /public_html/ftparea/* /public_html/images/* There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files. Each site sits within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location directly from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and each site using these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that generally this approach is possible, similar to how Zend Framework (and probably others) redirects all requests that don't match a specific file to index.php. I'm just not sure how exactly to go about implementing it and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.
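
    A hedged sketch of the fallback idea using mod_rewrite in each site's VirtualHost (paths are taken from the question; the behaviour of rewriting to an absolute filesystem path should be verified on your Apache version):

        # Serve the site's own copy of a file if it exists,
        # otherwise fall back to the shared /home/commonfiles tree.
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ /home/commonfiles/public_html/$1 [L]

    An alternative at the application level would be PHP's set_include_path() with the per-site directory first and the common directory second, which keeps the fallback logic out of Apache entirely.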

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, with its own tables. So in other words, the databases our MySQL server contains look like: main-web-app-db (our web app, containing tables for users' account info, billing, etc.) user_1_db (baseball_cards_table) user_2_db (recipes_table) .... And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data and then decide they want to change a column, we'd do an "alter table ....". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., and then to the user's personal database - is going to be inefficient. 2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
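
    For contrast, a hedged sketch of the same idea in a document store (MongoDB shell; the collection and field names are made up): one shared collection holds every user's records, and the schema travels inside each document, so there is no per-user database and no ALTER TABLE.

        // One collection for all users' records.
        db.records.insert({
            user_id: 42,
            collection_name: "baseball_cards",
            fields: { player: "Ken Griffey Jr.", year: 1989, condition: "mint" }
        });

        // "Changing a column" just means new documents have a different shape;
        // existing documents are left untouched.
        db.records.find({ user_id: 42, collection_name: "baseball_cards" });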

    Read the article

  • What's the performance difference between int and varchar for primary keys?

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple databases. So I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to use all the bits of each character efficiently. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.
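
    The usual reason behind the advice the question cites is that integer keys are fixed-width and compare as single CPU operations, while varchar comparisons are byte-by-byte and collation-aware. A hedged sketch of the composite-integer alternative (generic SQL, invented names):

        -- Composite integer key: fixed 16 bytes, no collation rules,
        -- and 2^63 sequence values per node before "running out".
        CREATE TABLE replicated_item (
            system_id  BIGINT NOT NULL,   -- unique per replication peer
            seq        BIGINT NOT NULL,   -- peer-local sequence number
            payload    VARCHAR(200),
            PRIMARY KEY (system_id, seq)
        );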

    Read the article

  • Structure map and generics (in XML config)

    - by James D
    Hi, I'm using the latest StructureMap (2.5.4.264), and I need to define some instances in the XML configuration for StructureMap using generics. However, I get the following 103 error: Unhandled Exception: StructureMap.Exceptions.StructureMapConfigurationException: StructureMap configuration failures: Error: 103 Source: Requested PluginType MyTest.ITest`1[[MyTest.Test,MyTest]] configured in Xml cannot be found Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]' System.ApplicationException: Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]' ---> System.TypeLoadException: Could not load type 'MyTest.ITest`1' from assembly 'StructureMap, Version=2.5.4.264, Culture=neutral, PublicKeyToken=e60ad81abae3c223'. at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) at System.Type.GetType(String typeName, Boolean throwOnError) at StructureMap.Graph.TypePath.FindType() --- End of inner exception stack trace --- at StructureMap.Graph.TypePath.FindType() at StructureMap.Configuration.GraphBuilder.ConfigureFamily(TypePath pluginTypePath, Action`1 action) A simple reproduction of the code is as follows: public interface ITest<T> { } public class Test { } public class Concrete : ITest<Test> { } Which I then wish to define in the XML configuration as follows: <DefaultInstance PluginType="MyTest.ITest`1[[MyTest.Test,MyTest]],MyTest" PluggedType="MyTest.Concrete,MyTest" Scope="Singleton" /> I've been racking my brain, but I can't see what I'm doing wrong - I've used Type.GetType to verify the type is actually valid, which it is. Anyone have any ideas? Thanks!
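
    One hedged observation: the exception shows the plugin type being resolved against the StructureMap assembly itself, as if the trailing ',MyTest' qualifier were lost during parsing. A guess worth trying (an assumption, not a confirmed StructureMap fix) is to space out and fully assembly-qualify both the generic argument and the outer type:

        <DefaultInstance
            PluginType="MyTest.ITest`1[[MyTest.Test, MyTest]], MyTest"
            PluggedType="MyTest.Concrete, MyTest"
            Scope="Singleton" />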

    Read the article

  • How to run stored procedures and ad-hoc scripts asynchronously with "loosely" connected SQL Server 2000 instances

    - by sanga
    Is there a way to initiate a script against an instance of SQL Server when it is not connected, and then have it run on the instance the next time it connects? This needs to happen without any intervention from me. Background situation, if you are interested: We have about 120 machines, each with its own instance of SQL Server 2000. Most of them are laptops. We have merge replication set up with each one. From time to time, there is a need to delete "rogue" GUIDs from some tables in some instances that overwrite legitimate records on the main publisher, as well as perform administrative tasks via stored procedures or ad-hoc SQL statements. The problem is there is no telling when each machine is going to be connected to the network. Some folks turn their machines completely off at the end of the day. Others disconnect their machines and take them on business trips, home for the weekend, etc. Did I mention that about 35 of these machines are in utility trucks and "attempt" to sync over a wireless connection? Thanks in advance for any assistance or suggestions. Sanga
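
    A hedged sketch of one approach: since merge replication already reaches every laptop, replicate a small command-queue table and have each subscriber execute unprocessed rows from a SQL Agent job. All names are invented, and note the caveat that the executed_at update would itself merge back to the publisher, so a local-only tracking table may be preferable.

        -- Rows written at the publisher arrive at each laptop on its next sync.
        CREATE TABLE pending_commands (
            command_id  INT IDENTITY PRIMARY KEY,
            sql_text    VARCHAR(8000) NOT NULL,
            executed_at DATETIME NULL
        )
        GO
        -- Run periodically on each subscriber.
        CREATE PROCEDURE run_pending_commands AS
        DECLARE @id INT, @sql VARCHAR(8000)
        SELECT TOP 1 @id = command_id, @sql = sql_text
            FROM pending_commands WHERE executed_at IS NULL ORDER BY command_id
        WHILE @id IS NOT NULL
        BEGIN
            EXEC (@sql)
            UPDATE pending_commands SET executed_at = GETDATE() WHERE command_id = @id
            SET @id = NULL   -- stays NULL (loop exits) if no row is found below
            SELECT TOP 1 @id = command_id, @sql = sql_text
                FROM pending_commands WHERE executed_at IS NULL ORDER BY command_id
        END
        GO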

    Read the article

  • Subset complete or balanced dataset in R

    - by SHRram
    I have a dataset with an unequal number of replications. I want to subset the data by removing those entries that are incomplete (i.e. replicated fewer times than the maximum). Just a small example: set.seed(123) mydt <- data.frame (name= rep ( c("A", "B", "C", "D", "E"), c(1,2,4,4, 3)), var1 = rnorm (14, 3,1), var2 = rnorm (14, 4,1)) mydt name var1 var2 1 A 2.439524 3.444159 2 B 2.769823 5.786913 3 B 4.558708 4.497850 4 C 3.070508 2.033383 5 C 3.129288 4.701356 6 C 4.715065 3.527209 7 C 3.460916 2.932176 8 D 1.734939 3.782025 9 D 2.313147 2.973996 10 D 2.554338 3.271109 11 D 4.224082 3.374961 12 E 3.359814 2.313307 13 E 3.400771 4.837787 14 E 3.110683 4.153373 summary(mydt) name var1 var2 A:1 Min. :1.735 Min. :2.033 B:2 1st Qu.:2.608 1st Qu.:3.048 C:4 Median :3.120 Median :3.486 D:4 Mean :3.203 Mean :3.688 E:3 3rd Qu.:3.446 3rd Qu.:4.412 Max. :4.715 Max. :5.787 I want to get rid of A, B, and E from the data as they are incomplete. Thus the expected output: name var1 var2 4 C 3.070508 2.033383 5 C 3.129288 4.701356 6 C 4.715065 3.527209 7 C 3.460916 2.932176 8 D 1.734939 3.782025 9 D 2.313147 2.973996 10 D 2.554338 3.271109 11 D 4.224082 3.374961 Please note the dataset is big, so the following may not be an option: mydt[mydt$name == "C",] mydt[mydt$name == "D", ]
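
    A hedged base-R sketch: count the rows per name with table(), keep only the groups at maximum replication, and drop the now-empty factor levels.

        counts   <- table(mydt$name)                        # rows per group
        keep     <- names(counts)[counts == max(counts)]    # here: "C", "D"
        complete <- droplevels(mydt[mydt$name %in% keep, ])
        complete                                            # only C and D remain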

    Read the article

  • How to increase my "advanced" knowledge of PHP further? (quickly)

    - by Kerry
    I have been working with PHP for years and have gotten a very good grasp of the language, having created many advanced and not-so-advanced systems that are working very well. The problem I'm running into is that I only learn when I find a need for something that I haven't learned before. This causes me to look up solutions and other code that handles the problem, and in doing so I will learn about a new function or structure that I hadn't seen before. It is in this way that I have learned many of my better techniques (such as studying classes put out by Amazon, Google or other major companies). The main problem with this is the concept of not being able to learn something if you don't know it exists. For instance, it took me several months of programming to learn about the empty() function; until then I would simply check the string length using strlen() to test for empty values. I'm now getting into building bigger and bigger systems, and I've started to read blogs like highscalability.com and been researching MySQL replication and server data for scaling. I know that the structure of your code is very important to make full systems work. After reading a recent blog about reddit's structure, it made me question whether there is some standard or "accepted system" out there. I have looked into frameworks (I've used Kohana, which I regretted, but decided that PHP frameworks were not for me) and I prefer my own library of functions rather than a framework. My current structure is a mix between WordPress, Kohana and my own knowledge. The ways I can see as being potentially beneficial are: Read blogs Read tutorials Work with someone else Read a book What would be the best way(s) to "get to the next level", the level of being a very good system developer?

    Read the article

  • SSH tunneling with Synology

    - by dvkch
    I'm trying to tunnel SMB and AFP services through SSH to access my NAS shares on my machine. I already do it successfully with my ReadyNAS using the following command line (run as my user on my Mac): ssh -Nf -p 22 -c 3des-cbc USER@SERVER -L 8888/127.0.0.1/548 -L 9999/127.0.0.1/139 but I cannot reproduce the same with the Synology NAS. Connecting using this command gives me the following error: channel 4: open failed: administratively prohibited: open failed I also tried with a Windows client (using Bitvise tunneler): it works with the ReadyNAS but not the Synology, and I get the following error message: server denied request for client-side server-2-client forwarding on 127.0.0.1:139 I modified /etc/ssh/sshd_config: MaxSessions 10 PasswordAuthentication yes PermitEmptyPasswords no AllowTcpForwarding yes GatewayPorts yes PermitTunnel yes Is there any way to make it work? I must add that I can successfully connect via SSH to the NAS, so I don't think this is a firewall issue between the Synology and my computer. Thanks for your answers.

    Read the article

  • RRAS on Windows Server 2012 box

    - by TerminalTox1n
    I'm trying to add the RRAS VPN roles to my Server 2012 box. The error I am getting is: install-windowsfeature : The request to add or remove features on the specified server failed. Installation of one or more roles, role services, or features failed. One or several parent features are disabled so current feature can not be enabled. Error: 0xc004000d At line:1 char:1 + install-windowsfeature -name directaccess-vpn + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (@{Vhd=; Credent...Name=localhost}:PSObject) [Install-WindowsFeature], Exception + FullyQualifiedErrorId : DISMAPI_Error__Failed_To_Enable_Updates,Microsoft.Windows.ServerManager.Commands.AddWindowsFeatureCommand This box is running as a domain controller. Does anybody have any insight into running a domain controller and a VPN endpoint on the same Server 2012 box? Thanks!

    Read the article

  • Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name

    - by Tapas Bose
    I have a batch file which starts the Oracle services: net start OracleOraDb11g_home1TNSListener net start OracleServiceORCL call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole pause But on executing the script I am getting: C:\windows\system32>net start OracleOraDb11g_home1TNSListener The requested service has already been started. More help is available by typing NET HELPMSG 2182. C:\windows\system32>net start OracleServiceORCL The OracleServiceORCL service is starting......... The OracleServiceORCL service was started successfully. C:\windows\system32>call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name. Press any key to continue . . . I am using Windows 7 64-bit with Oracle 11gR2 64-bit. Any information will be very helpful. Thanks and regards.
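
    A hedged fix, assuming the database unique name is ORCL (inferred from the OracleServiceORCL service name): set ORACLE_UNQNAME (and, for safety, ORACLE_SID and ORACLE_HOME) at the top of the batch file before emctl runs.

        @echo off
        rem Values are assumptions based on the paths and service names above.
        set ORACLE_HOME=C:\app\Edifixio\product\11.2.0\dbhome_1
        set ORACLE_SID=ORCL
        set ORACLE_UNQNAME=ORCL

        net start OracleOraDb11g_home1TNSListener
        net start OracleServiceORCL
        call %ORACLE_HOME%\BIN\emctl.bat start dbconsole
        pause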

    Read the article

  • VMWare Server 2 Install is Failing w/ Error 25032: "failed to customize windows logon process"

    - by Justin Searls
    VMware Server 2 install question here.* Straightforward question that would probably require a VMware expert to pull apart, given that Google has been totally worthless on this. On a patched Windows XP machine, any attempt to install VMware Server 2.0.1 results in failure just prior to completion (the progress bar is full, but I can tell the network adapter stuff hasn't been fired yet and most of the services haven't been installed). The error: Error 25032. Failed to customize Windows logon process (). Please contact your administrator. Upon dismissing the error, you're treated to: Warning 25033. Failed to remove Windows logon customization (VMGINA.DLL). Please contact your administrator. Clicking "OK" rolls back your installation. Killing the installer and hoping that it somehow leaves a working install behind was also unproductive. *I hope install troubleshooting isn't outside the purview of serverfault; I'm typically an SO user.

    Read the article

  • Unable to create a VSS snapshot of the source

    - by SuperFurryToad
    The following error is preventing me from cloning a Windows 7 64-bit computer: Unable to create a VSS snapshot of the source volume(s). Error code: 2147754754 (0x80042302) I'm using VMware vCenter Converter Standalone Client 4.0.1. Any ideas on what might be causing this? When I checked the services running, I noticed that the Volume Shadow Copy service was set to manual, so I started the service and switched it to automatic. It still didn't work after that. I checked the event logs and got the following errors: Event ID: 22 Description: Volume Shadow Copy Service error: A critical component required by the Volume Shadow Copy service is not registered. This might happen if an error occurred during Windows setup or during installation of a Shadow Copy provider. The error returned from CoCreateInstance on class with CLSID {e579ab5f-1cc4-44b4-bed9-de0991ff0623} and Name IVssCoordinatorEx2 is [0x80040154, Class not registered Event ID: 8193 Description: Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x80040154, Class not registered

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 R2 Standard, SQL Server 2008 Enterprise, and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, and they are NOT sharing the same port, so that can't be the error. Do you have any idea what the problem could be? Is there some way I can debug this?

    Read the article

  • Using ADFS 2.0 for Google apps single sign on

    - by Zoredache
    Microsoft Active Directory Federation Services 2.0 has recently been released, and it has passed interoperability tests for SAML 2.0. Does this mean that it can be used to authenticate users of Google Apps, which also uses SAML? Has anyone successfully set up Google Apps with ADFS 2.0 for single sign-on? If you have gotten it to work, please tell us what is required to get it working. To put it another way, does someone have a good HOWTO for using ADFS 2.0 and Google Apps together? I was not able to find anything through a search of the web.

    Read the article

  • .NET Framework 4 updates breaking MMC.exe and other CLR.dll Exceptions

    - by Fox
    I've seen this issue floating around the net for the last few weeks, and I'm facing exactly the same problem. My servers are set to auto-install updates using Windows Update (not clever, I know), and since about 2 months ago, I've been getting strange exceptions. The first thing that happens is that MMC.exe just crashes randomly, sometimes on startup of the console. The exception in the Windows Application log is as follows: Faulting application name: mmc.exe, version: 6.1.7600.16385, time stamp: 0x4a5bc808 Faulting module name: mscorwks.dll, version: 2.0.50727.5448, time stamp: 0x4e153960 Secondly, on the same server, I have some custom Windows services which constantly crash with exceptions: Faulting application name: Myservice.exe, version: 1.0.0.0, time stamp: 0x4f44cb11 Faulting module name: clr.dll, version: 4.0.30319.239, time stamp: 0x4e181a6d Exception code: 0xc0000005 Fault offset: 0x000378aa The exception is not in my code. I've tested and retested it. My server has the following .NET Framework updates installed (screenshot omitted). Does anyone have any idea?

    Read the article

  • Web server connection to SQL Server: Response Packet [Malformed Packet]

    - by John Murdoch
    I am seeing very, very sluggish performance between my web server (which handles HTTP web service connections) and a separate server running Microsoft SQL Server 2008. I have been capturing packet traffic on the web server, trying to understand why things are running so slowly. I am using Wireshark to capture the packet traffic. The apparent problem is that the web server is sending TDS packets to the data server, with each packet followed by a response from the data server showing "Response Packet [Malformed Packet]" in the Info field. The packet sent from the web server appears to have an invalid checksum. Has anyone seen this type of problem before? Any ideas?

    Read the article

< Previous Page | 400 401 402 403 404 405 406 407 408 409 410 411  | Next Page >