Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.

Page 784/1018 | < Previous Page | 780 781 782 783 784 785 786 787 788 789 790 791  | Next Page >

  • IPC in C under Linux

    - by poly
    I'm building a messaging solution with the following setup: all the messages are saved in a DB, and two or more reader processes will read from this DB and send data to other process(es), which will send it over the network. My approach is depicted below. The following has 4 sender processes with 4 fifos, and 2 readers with 2 fifos:

      reader0 <- read data from DB
      reader1 <- read data from DB

    Sending part:

      network_handler0 <- network_handler_fifo0 <- reader0
      network_handler1 <- network_handler_fifo1 <- reader1
      network_handler2 <- network_handler_fifo2 <- reader0
      network_handler3 <- network_handler_fifo3 <- reader1

    Receiving part:

      network_handler0 -> reader_fifo0 -> reader0 -> write to DB
      network_handler1 -> reader_fifo1 -> reader1 -> write to DB
      network_handler2 -> reader_fifo0 -> reader0 -> write to DB
      network_handler3 -> reader_fifo1 -> reader1 -> write to DB

    I have a few problems with this setup, and please note that the number of processes could be more than that depending on the environment - I could make it 20 readers and 10 network_handlers, or it could be as shown above. The buffer size is 64K and the message size is 200K - is this small enough to make the writes/reads to/from the fifos atomic? How can I make the processes aware of each other? For example, reader0 writes to network_handler_fifo0 and network_handler_fifo2 - how can I make it start writing to other fifos if the current ones are full or their network_handlers are dead? I thought about making the reader process more general in its writing, so that, for example, it writes to all network fifos using a lock mechanism and stops writing to the one whose process has died; I didn't use that because a lock mechanism could slow things down. BTW, each network_handler is an SCTP association, so network_handler0 is association 0, network_handler1 is association 1 and so on. Any idea is appreciated - even if I have to change the setup above.
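    A side note on the atomicity question plus a minimal sketch of the "skip a full or dead fifo" idea (hedged - the fd array and error handling below are illustrative, not taken from the poster's code): POSIX only guarantees that pipe/FIFO writes are atomic up to PIPE_BUF bytes (4096 on Linux), so 200K messages will interleave unless the writer adds its own framing; the 64K figure is the pipe buffer capacity, not the atomicity limit. If messages are kept at or below PIPE_BUF, a reader can open every handler FIFO with O_NONBLOCK and simply fall through to the next one when a write fails:

        #include <errno.h>
        #include <fcntl.h>
        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Illustrative only. With O_NONBLOCK and len <= PIPE_BUF, write() is
         * all-or-nothing: it either queues the whole message or fails with
         * EAGAIN (fifo full) / EPIPE (reader gone, if SIGPIPE is ignored). */
        static int send_to_any_handler(const int fifo_fds[], int nfifos,
                                       const void *msg, size_t len)
        {
            for (int i = 0; i < nfifos; i++) {
                ssize_t n = write(fifo_fds[i], msg, len);
                if (n == (ssize_t)len)
                    return i;                    /* delivered to handler i */
                if (n < 0 && errno != EAGAIN && errno != EPIPE)
                    perror("write");             /* unexpected error; try the rest */
            }
            return -1;                           /* every fifo is full or dead */
        }

        /* Elsewhere, the fds would be set up with something like:
         *   signal(SIGPIPE, SIG_IGN);
         *   fds[0] = open("/tmp/network_handler_fifo0", O_WRONLY | O_NONBLOCK);
         */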

    Read the article

  • Application workflow

    - by manseuk
    I am in the planning process for a new application. The application will be written in PHP (using the Symfony 2 framework), but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application - again, probably not relevant at this point.

    The application manages SIM cards for lots of different providers - each SIM card belongs to a single provider, but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against the SIM card - for example Activate it, Barr it, Check on its status, etc. Some of the providers provide an API for doing this - so a single access point with multiple methods, e.g. activateSIM, getStatus, barrSIM, etc. The method names differ for each provider and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by sending emails with attachments - the attachments are normally a CSV file that contains the SIM reference and action required - the email is processed by the provider and replied to once the action has been completed.

    To give you an example - the front end of my application will provide a customer with a list of SIM cards they own and give them access to the actions that are provided by the provider of each specific SIM card - some methods may require extra data which will either be stored in the backend or collected from the user frontend. Once the user has selected their action and added any required data, I will handle the process in the backend and provide either instant feedback, in the case of the providers with APIs, or start the process off by sending an email and waiting for its reply before processing it and updating the backend, so that next time the user checks the SIM card its status is correct (i.e. updated by a backend process).

    My reason for creating this question is that I'm stuck! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider interface with the most common methods - getStatus, activateSIM and barrSIM - and then implementing that interface for each provider, so class Provider1 implements Provider, then using a Factory to create the required class depending on the user-selected SIM card and invoking the selected method. This would work fine if all providers offered the same methods, but they don't - there is a subset which is common, but some providers offer extra methods. How can I implement that flexibly? How can I deal with the processes where the workflow is different - i.e. some methods require an API call and a value returned, and some require an email to be sent, where the next stage of the process doesn't start until the email reply is received?

    Please help! (I hope this is a readable question and that this is the correct place to be asking.)

    Update: I guess what I'm trying to avoid is a big if or switch/case statement - some design pattern that gives me a flexible approach to implementing this kind of fluid workflow... anyone?
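    One rough shape the interface/factory idea could take while still allowing provider-specific extras (every class and method body below is invented for illustration - none of it comes from the actual codebase): keep the common methods on the interface, let each provider advertise which actions it supports, and hide the "instant API call vs. queued email" difference behind a single perform() entry point.

        <?php
        // Illustrative sketch only - names and bodies are placeholders.

        interface Provider
        {
            public function getStatus(string $simRef): string;
            public function supportedActions(): array;   // common set plus provider extras
            public function perform(string $action, string $simRef, array $params = []): void;
        }

        // A provider with a real API: actions complete synchronously.
        class Provider1 implements Provider
        {
            public function getStatus(string $simRef): string { return 'ACTIVE'; /* API call here */ }
            public function supportedActions(): array { return ['activateSIM', 'barrSIM', 'swapSIM']; }
            public function perform(string $action, string $simRef, array $params = []): void
            {
                // map $action onto this provider's API method names and call it
            }
        }

        // An email-only provider: actions are queued and resolved later by a backend job.
        class Provider2 implements Provider
        {
            public function getStatus(string $simRef): string { return 'PENDING'; /* last value from the DB */ }
            public function supportedActions(): array { return ['activateSIM', 'barrSIM']; }
            public function perform(string $action, string $simRef, array $params = []): void
            {
                // build the CSV attachment, send the email, record the action as pending
            }
        }

        class ProviderFactory
        {
            public static function forProvider(string $name): Provider
            {
                $map = ['provider1' => Provider1::class, 'provider2' => Provider2::class];
                if (!isset($map[$name])) {
                    throw new InvalidArgumentException("Unknown provider: $name");
                }
                $class = $map[$name];
                return new $class();
            }
        }

    The front end would render buttons from supportedActions() and call perform(); whether the result is instant or deferred (an email round trip) stays inside each provider class, so the controller code never branches on provider type.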

    Read the article

  • Running MS Access Programs

    - by fredhappier
    I have an old program developed in MS Access and would like to convert it to Kexi somehow. The program on Windows is launched with Access. Is there any way that Kexi can launch this program? I know my way around Ubuntu and the terminal, but I'm not well versed in databases. Once you make something in Kexi, how do you "run" it or "view" what you've made? So far I am able to import the MDB file into Kexi and see all of the database data, but that is as far as I have gone.

    The program was made by a relative years ago for my dad. I myself have been an Ubuntu-only user for 6+ years now, have no intention of touching Windows, and am looking for a Linux solution. My dad is also an Ubuntu user, hence why I'm looking for a solution. If Kexi cannot launch and run an MDB file, what else can I try? Anything browser based? Any tips or direction would be extremely helpful.

    I spoke to my brother, who originally made the program. I told him about Kexi and here is what he said. Does any of this make sense? Thanks.

    "This is how I would try to get them to work: Stand-alone setup - after import, look for an option where you designate which form object you want to open upon startup. It might be in the tools tab in the picture below. After you save that change and restart it, it should work. Front end/back end setup - Do what I suggested for the stand-alone setup to the "front-end" MDB file. After you do that, put the other file (table MDB file) where you want it to reside on the network. Now, open back up the "front end" file and look for an option that will allow you to "connect" to those tables in the other file. It looks like it could be in the "External data" tab in the picture below. For this setup, you may need to do these two tasks in the reverse order I just mentioned."

    Thanks! Fred

    Read the article

  • Are there software options (preferably .NET) for doing distance and speed analysis of footballers moving on video?

    - by Anonymous Type
    Editing question for clarity. Thanks for the feedback so far - very insightful. I'm not sure how far along this part of the software community is, and what, if any, libraries exist for me to leverage. Here's what I'm trying to do.

    Problem: Take an existing video of a game of rugby league. The rugby league field is 100 metres long and 70 metres wide, and has white line markings every 10 metres running along the width of the field, as well as along the sidelines. Each side has 13 players on the field. Players on each team have identical jerseys that normally contrast strongly against the background colours (green/brown field colour), the referee's colour (usually yellow) and the designated water runner (orange). All players have a unique number in thick white lettering on their backs for identification.

    Video is taken with a high definition camera. Currently only one camera is used (2D) and existing video does not contain a foreground object of fixed spatial dimensions (as suggested in one answer for comparison measurements; however, I could add this to future filming sessions if it is worthwhile). The players do not run in a straight line 50% of the time but will go sideways or on a diagonal to play the ball. The distance measured always starts from the spot of the previous "tackle", which ends where the player stops forward movement. It is not always possible to determine the player's number from the video (facing the other direction, sunlight, others standing in the way of the camera), but this isn't important, as the software could allow for manual inputting of unknown "runs" at a later point after analysis.

    I want to determine the distance between two points (i.e. where the player started his "run" and where he finished it). I'm guessing that this would be quite doable if I manually marked the start and end point in the video. But how would I use landmarks in the background to determine the distance (assuming the person taking the video has kept it from jerking around)?

    Question: Do software packages or libraries exist that are specialised enough to assist with writing analysis software to determine a sportsperson's distance travelled based on video taken of the performance?

    Read the article

  • How should I refactor switch statements like this (Switching on type) to be more OO?

    - by Taytay
    I'm seeing some code like this in our code base, and want to refactor it (TypeScript pseudocode follows):

        class EntityManager {
            private findEntityForServerObject(entityType:string, serverObject:any):IEntity {
                var existingEntity:IEntity = null;
                switch(entityType) {
                    case Types.UserSetting:
                        existingEntity = this.getUserSettingByUserIdAndSettingName(serverObject.user_id, serverObject.setting_name);
                        break;
                    case Types.Bar:
                        existingEntity = this.getBarByUserIdAndId(serverObject.user_id, serverObject.id);
                        break;
                    // Lots more case statements here...
                }
                return existingEntity;
            }
        }

    The downsides of switching on type are self-explanatory. Normally, when switching behavior based on type, I try to push the behavior into subclasses so that I can reduce this to a single method call, and let polymorphism take care of the rest. However, the following two things are giving me pause:

    1) I don't want to couple the serverObject with the class that is storing all of these objects - it doesn't know where to look for entities of a certain type. And unfortunately, the identity of a ServerObject varies with the type of ServerObject (so sometimes it's just an ID, other times it's a combination of an id and a uniquely identifying string, etc.). And this behavior doesn't belong down there on those subclasses; it is the responsibility of the EntityManager and its delegates.

    2) In this case, I can't modify the ServerObject classes since they're plain old data objects.

    It should be mentioned that I've got other instances of the above method that take a parameter like IEntity and proceed to do almost the same thing (but slightly modify the names of the methods they're calling to get the identity of the entity). So we might have:

        case Types.Bar:
            existingEntity = this.getBarByUserIdAndId(entity.getUserId(), entity.getId());
            break;

    So in that case, I can change the entity interface and subclasses, but this isn't behavior that belongs in that class. So, I think that points me to some sort of map. So eventually I will call:

        private findEntityForServerObject(entityType:string, serverObject:any):IEntity {
            return aMapOfSomeSort[entityType].findByServerObject(serverObject);
        }

        private findEntityForEntity(someEntity:IEntity):IEntity {
            return aMapOfSomeSort[someEntity.entityType].findByEntity(someEntity);
        }

    Which means I need to register some sort of strategy classes/functions at runtime with this map. And again, I darn well better remember to register one for each of my types, or I'll get a runtime exception. Is there a better way to refactor this? I feel like I'm missing something really obvious here.
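    For what it's worth, a minimal sketch of the registration-map approach described above (the finder bodies and the Types constants are placeholders, not the real lookups):

        // Illustrative sketch - each finder owns the identity rules for one entity type.
        interface IEntity { entityType: string; }

        interface EntityFinder {
            findByServerObject(serverObject: any): IEntity;
            findByEntity(entity: IEntity): IEntity;
        }

        class EntityManager {
            private finders: { [entityType: string]: EntityFinder } = {};

            registerFinder(entityType: string, finder: EntityFinder): void {
                this.finders[entityType] = finder;
            }

            private lookup(entityType: string): EntityFinder {
                const finder = this.finders[entityType];
                if (!finder) { throw new Error("No finder registered for " + entityType); }
                return finder;
            }

            private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
                return this.lookup(entityType).findByServerObject(serverObject);
            }

            private findEntityForEntity(someEntity: IEntity): IEntity {
                return this.lookup(someEntity.entityType).findByEntity(someEntity);
            }
        }

        // e.g. manager.registerFinder(Types.UserSetting, new UserSettingFinder(manager));

    The "forgot to register a type" worry can be moved from a runtime exception to a unit test that asserts every known Types value has an entry in the map.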

    Read the article

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes this is a general question, and I'd be perfectly happy to move this to another portion of SE, but I didn't see a more fitting sub. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So, I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, if it's just absolutely unacceptable to speak in general terms.

    I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen based - the user can use and interact with the devices. So, in terms of my QA role, in the short term I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much.

    Most of our hardware runs .NET/C# code, with some of the servers running Java - but I don't expect to need to run tests on the Java side at this point. I hope to meet with one developer today and try to get a good idea of the output from the hardware so that I can 'mock' the data that gets sent to the servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which kiosks report their data to, gets updated from the kiosks, but the remote server does not. Or, vice versa: when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful.

    I've bought the book "How Google Tests Software", but it's really more a book about how their software testing is different from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble kinda had a terrible selection. I also figure a book from 2005 is not necessarily that good either.

    Read the article

  • How to check if a variable is an integer? [closed]

    - by FyrePlanet
    I'm going through my C++ book and have currently made a working Guess The Number game. The game generates a random number based on the current time, has the user input their guess, and then tells them whether it was too high, too low, or the correct number. The game functions fine if you enter a number, but returns 'Too high!' if you enter something that is not a number (such as a letter or punctuation). Not only does it return 'Too high!', it also continually returns it. I was wondering what I could do to check whether the guess, as input by the user, is an integer, and if it is not, to return 'Not a number. Guess again.' Here is the code:

        #include <iostream>
        #include <cstdlib>
        #include <ctime>
        using namespace std;

        int main()
        {
            srand(time(0)); // seed the random number generator
            int theNumber = rand() % 100 + 1; // random number between 1 and 100
            int tries = 0, guess;

            cout << "\tWelcome to Guess My Number!\n\n";

            do {
                cout << "Enter a guess: ";
                cin >> guess;
                ++tries;

                if (guess > theNumber)
                    cout << "Too high!\n\n";

                if (guess < theNumber)
                    cout << "Too low!\n\n";
            } while (guess != theNumber);

            cout << "\nThat's it! You got it in " << tries << " guesses!\n";
            cout << "\nPress Enter to exit.\n";
            cin.ignore(cin.rdbuf()->in_avail() + 1);

            return 0;
        }
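    A common fix, sketched below (hedged - this rewrites only the input step, and the helper name is made up): when cin >> guess is fed something that isn't a number, the stream enters a failed state and the offending characters stay in the input buffer, so every later extraction does nothing and the loop keeps comparing the same stale value. Clearing the error flags and discarding the rest of the line recovers from that:

        #include <iostream>
        #include <limits>
        using namespace std;

        // Keep prompting until a real integer is entered.
        int readGuess()
        {
            int guess;
            while (true) {
                cout << "Enter a guess: ";
                if (cin >> guess)
                    return guess;                    // got a valid integer

                cout << "Not a number. Guess again.\n\n";
                cin.clear();                         // clear the failed state
                cin.ignore(numeric_limits<streamsize>::max(), '\n'); // drop the bad input
            }
        }

    In the original do/while loop, the cin >> guess line would then simply become guess = readGuess();.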

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    Real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs:

        http://domain/web1
        http://domain/web2

    I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However, I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg:

        frontend webVIP_80
            mode http
            bind :80
            #acl routing to backend
            acl web1_path path_beg /web1
            acl web2_path path_beg /web2
            #which backend
            use_backend webfarm1 if web1_path
            use_backend webfarm2 if web2_path
            default_backend webfarm1

        backend webfarm1
            mode http
            reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
            server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

        backend webfarm2
            mode http
            reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
            server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, I see in the error logs on a server in each backend that the request is for the resource /web1 or /web2 respectively. Therefore I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep

    Summary: I'm trying to route traffic based on URI; however, I want to strip the URI on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1. Thank you! -Jim
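    One detail worth checking (a guess based only on the config above): the search pattern ^([^\ ]*)\ /web1/(.*) only matches request lines that contain /web1/ with a trailing slash, so a request for exactly /web1 goes through to the backend untouched - which would explain seeing /web1 in that backend's logs. A variant that strips the prefix with or without a trailing path might look like this (illustrative, shown for webfarm1 only):

        backend webfarm1
            mode http
            # turns "GET /web1 HTTP/1.1" into "GET / HTTP/1.1"
            # and "GET /web1/foo HTTP/1.1" into "GET /foo HTTP/1.1"
            reqrep ^([^\ ]*)\ /web1/?(.*) \1\ /\2
            # ...rest of the backend unchanged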

    Read the article

  • SSTP client disconnects shortly after successfully connecting to VPN

    - by Eran Betzalel
    I'm successfully authenticating and connecting to an SSTP VPN (on Windows 2008) from my Windows 7 machine, but for some reason the connection is dropped about 1-2 seconds after it's established. I've done the following:

        1. Defined an SSTP VPN on my Windows Server 2008.
        2. Defined the same machine as a CA.
        3. Issued the needed certificates and published them on the client.

    I'm currently testing this VPN inside my LAN, so all the needed ports are open. Here are the event log entries when trying to connect:

    Error log (client): The user HOME\User dialed a connection named Home VPN which has terminated. The reason code returned on termination is 829.

    Error log (server VPN): The user HOME\User connected on port VPN0-0 on 7/27/2012 at 1:57 AM and disconnected on 7/27/2012 at 1:57 AM. The user was active for 0 minutes 0 seconds. 312 bytes were sent and 4528 bytes were received. The reason for disconnecting was user request.

    What would be the issue? How can I resolve or debug it?

    UPDATE: I've found an event log message (Log=System, Source=RasSstp) on the Windows 7 machine that tries to connect to the VPN:

        The SSTP-based VPN connection to the remote access server was terminated because of a security check failure. Security settings on the remote access server do not match settings on this computer. Contact the system administrator of the remote access server and relay the following information:
        SHA1 Certificate Hash: 065D681...520375552F
        SHA256 Certificate Hash: 18DED363...EEEE28CFD00

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs during boot after the message "Running /scripts/init-bottom ... done".

    I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; Ubuntu Server 12.04.4 LTS was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM. The idea was to get the same version of the kernel running on the Proxmox container and the VirtualBox VM:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    Then I rsynced the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    I shut down the virtual machine and booted the VM with a bootable Linux image (I used the desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), then dropped to a root prompt and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    Removed files from most of /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    Moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    Rebooted the system. For me, at this point the system freezes after starting, after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.

    Read the article

  • Request Entity Too Large error while uploading files of more than 128KB over SSL

    - by tushar
    We have a web portal built on the Java Spring framework, running on a Tomcat app server. The portal is served through an Apache web server connected to Tomcat through the JK connector. The entire portal is HTTPS-enabled on Apache's port 443. The Apache version is Apache/2.4.2 (Unix), the latest stable version of the Apache web server.

    Whenever we try to upload files of more than 128 KB into the portal, we are facing a 413 error:

        Request Entity Too Large
        The requested resource /teamleadchoachingtracking/doFileUpload does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.

    In the Apache error log we get the following errors:

        AH02018: request body exceeds maximum size (131072) for SSL buffer
        AH02257: could not buffer message body to allow SSL renegotiation to proceed

    We searched on Google and there were suggestions to set SSLRenegBufferSize to some high value like 10MB. Based on these suggestions, we put the following entry in the VirtualHost section of the httpd config file:

        SSLRenegBufferSize 10486000

    But the error still persists. Also, we have specified SSLVerifyClient none, but renegotiation is still happening. This is a very inconsistent and frustrating error. Any help will be highly appreciated. Many thanks in advance.
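    One avenue that gets suggested for this pairing of AH02018/AH02257 (speculative - the location below is just the app's context root, and whether it applies depends on which directive is actually triggering the renegotiation): the renegotiation comes from per-directory SSL settings, and some reports indicate SSLRenegBufferSize only takes effect when it sits in that same <Location> or <Directory> scope rather than at the VirtualHost level, e.g.:

        <Location /teamleadchoachingtracking>
            SSLRenegBufferSize 10486000
        </Location>

    It may also be worth confirming with "apachectl -S" which virtual host actually serves the request, since a directive placed in an unused vhost block would silently do nothing.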

    Read the article

  • Networking stopped working on Ubuntu

    - by 1337Rooster
    I installed Ubuntu 10.04 through the Wubi installer (funny, I installed it today and thought I would have gotten 10.10). I had a network connection and everything was working fine. I rebooted my computer a couple of times and then suddenly I could not connect to the network, and when I click the wireless/networking icon it says "Networking Disabled". I reinstalled Ubuntu and the problem went away; after a few reboots the problem returned. I have tried restarting to see if it would come back, as well as a few other things listed below. Any other suggestions would be appreciated.

    Tried to restart networking via /etc/init.d/networking:

        amato@ubuntu:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...
        Ignoring unknown interface eth0=eth0.
        [ OK ]

    Tried to stop and start it:

        amato@ubuntu:~$ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces...
        [ OK ]
        amato@ubuntu:~$ sudo /etc/init.d/networking start
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service networking start
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start networking
        networking stop/waiting

    Tried start networking:

        amato@ubuntu:~$ start networking
        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.58" (uid=1000 pid=2241 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo start networking
        networking stop/waiting

    Tried service networking restart:

        amato@ubuntu:~$ service networking restart
        restart: Rejected send message, 1 matched rules; type="method_call", sender=":1.60" (uid=1000 pid=2248 comm="restart) interface="com.ubuntu.Upstart0_6.Job" member="Restart" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo service networking restart
        restart: Unknown instance:

    Here are the contents of my /etc/network/interfaces:

        auto lo
        iface lo inet loopback

    I even tried to modify it to this (based on something I read online, not sure if I was doing the right thing here). Tried everything again and no luck:

        auto lo eth0
        iface lo inet loopback
        iface eth0 inet dhcp

    Read the article

  • Unable to Install SQL Server on Server 2012

    - by Jeff
    The problem: I have been trying to install SQL Server 2012 on Windows Server 2012. I continually get the same error:

        Managed SQL Server Installer has stopped working
        Problem signature:
          Problem Event Name:       CLR20r3
          Problem Signature 01:     scenarioengine.exe
          Problem Signature 02:     11.0.3000.0
          Problem Signature 03:     5081b97a
          Problem Signature 04:     Microsoft.SqlServer.Chainer.Setup
          Problem Signature 05:     11.0.3000.0
          Problem Signature 06:     5081b97a
          Problem Signature 07:     18
          Problem Signature 08:     0
          Problem Signature 09:     System.IO.FileLoadException
          OS Version:               6.2.9200.2.0.0.272.79
          Locale ID:                1033
          Additional Information 1: c319
          Additional Information 2: c3196e5863e32e0baf269d62f56cbc70
          Additional Information 3: 422d
          Additional Information 4: 422d950c58f4efd1ef1d8394fee5d263

    What I've tried. After initial googling, I've tried the following things:

        1. Go through the list of hardware and software pre-reqs. All the software seems to be there by default on Server 2012 and my hardware meets the reqs.
        2. Copy the installation media to the local drive and try to install from that (rather than a DVD). This produced the same error.
        3. Based on another error message, I installed .NET 4.0 (which apparently is not on Server 2012 out of the box). Same error.
        4. Install from the command line. This didn't work either, but it gave me a different error:

        Error: Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'Microsoft.SqlServer.Configuration.Sco, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A) ---> System.Security.SecurityException: Strong name validation failed. (Exception from HRESULT: 0x8013141A)
           --- End of inner exception stack trace ---
           at Microsoft.SqlServer.Chainer.Infrastructure.InputSettingService.CheckForBooleanInputSettingExistenceFromCommandLine(ServiceContainer context, String settingName)
           at Microsoft.SqlServer.Chainer.Setup.Setup.DebugBreak(ServiceContainer context)
           at Microsoft.SqlServer.Chainer.Setup.Setup.Main()

    Any ideas what I am missing?

    Read the article

  • xinetd 'connection reset by peer'

    - by ceejayoz
    I'm using percona-clustercheck (which comes with Percona's XtraDB Cluster packages) with xinetd, and I'm getting an error when trying to curl the clustercheck service.

    /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="clustercheckuser"
        MYSQL_PASSWORD="clustercheckpassword!"
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
            exit 0
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
            exit 1
        fi

    /etc/xinetd.mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
            # this is a config for xinetd, place it in /etc/xinetd.d/
            disable         = no
            flags           = REUSE
            socket_type     = stream
            port            = 9200
            wait            = no
            user            = nobody
            server          = /usr/bin/clustercheck
            log_on_failure  += USERID
            only_from       = 10.0.0.0/8 127.0.0.1
            # recommended to put the IPs that need
            # to connect exclusively (security purposes)
            per_source      = UNLIMITED
        }

    When attempting to curl the service, I get a valid response (HTTP 200, text) but a 'connection reset by peer' notice at the end:

        HTTP/1.1 200 OK
        Content-Type: text/plain

        Percona XtraDB Cluster Node is synced.

        curl: (56) Recv failure: Connection reset by peer

    Unfortunately, Amazon ELB appears to see this as a failed check, not a successful one. How can I get clustercheck to exit gracefully in a manner that curl doesn't see as a connection failure?
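    For what it's worth, one tweak that is often suggested for exactly this symptom (hedged - the idea is that the script exits and xinetd tears the socket down before the client has finished reading, so the client sees a reset rather than a clean close): add Connection and Content-Length headers so that curl, and hopefully the ELB health check, knows the response is complete before the reset can happen. A sketch of the 200 branch:

        # Content-Length must match the body exactly:
        # "Percona XtraDB Cluster Node is synced.\r\n" is 40 bytes.
        /bin/echo -en "HTTP/1.1 200 OK\r\n"
        /bin/echo -en "Content-Type: text/plain\r\n"
        /bin/echo -en "Connection: close\r\n"
        /bin/echo -en "Content-Length: 40\r\n"
        /bin/echo -en "\r\n"
        /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
        exit 0

    The 503 branch would need the same treatment, with its own byte count (44 for "Percona XtraDB Cluster Node is not synced.\r\n").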

    Read the article

  • Mod Rewrite Help - Pseudo-Subdirectories

    - by Gimpyfuzznut
    I am dealing with a frustrating problem with Joomla that is going to require some URL trickery. The idea is straightforward, but after reading a bunch of guides for mod_rewrite I still can't seem to get it to work.

    Let's say my site is www.mysite.com. Joomla is already performing some rewriting for SEF URLs, so I have links like www.mysite.com/home and www.mysite.com/news and so on. I want to be able to have (4) pseudo-subdirectories like www.mysite.com/mode1/ and www.mysite.com/mode2/ and so on. These subdirectories should work as if the subdirectory isn't there, i.e. both www.mysite.com/mode1/home and www.mysite.com/mode2/home should pull up the same www.mysite.com/home. It should point any www.mysite.com/mode1/anypagehere to www.mysite.com/anypagehere.

    The reason I am asking for this is that I will be reading the URL for mode1, mode2, etc. to modify the template page. There will be a landing page that will direct people to /mode1/ and /mode2/ etc., and the template will change based on that. Note that I don't want to actually pass a parameter in the URL accessible by a GET or whatever, because Joomla removes it (perhaps because of my current mod_rewrite settings). I've pasted the current .htaccess file:

        RewriteBase /joomla

        ########## Rewrite rules to block out some common exploits
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked request to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]

        ########## Begin - Joomla! core SEF Section
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        #RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        ########## End - Joomla! core SEF Section
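    A rough shape such a rule might take, placed before the Joomla SEF section (illustrative only - the mode names, the SITEMODE variable and the exact flags are assumptions, and after the internal redirect the variable typically shows up in PHP as REDIRECT_SITEMODE rather than SITEMODE):

        # Remember which pseudo-subdirectory was requested, then strip it so the
        # normal Joomla SEF rules see /home, /news, etc.
        RewriteRule ^(mode[1-4])/(.*)$ $2 [E=SITEMODE:$1,L]
        RewriteRule ^(mode[1-4])/?$ index.php [E=SITEMODE:$1,L]

    The template could then branch on $_SERVER['REDIRECT_SITEMODE'] (or SITEMODE) instead of a query-string parameter that the SEF rewriting strips out.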

    Read the article

  • vCenter appliance won't use mail relay server

    - by Safado
    tl;dr: sendmail is configured to use a relay server but still insists on using 127.0.0.1 as the relay, which results in mail not being sent.

    We have the open source vCenter appliance (v5.0) managing our ESXi cluster. When connected to it via the vSphere Client, you can configure the SMTP relay server to use by going to Administration > vCenter Server Settings > Mail; there you can set the SMTP Server value. I looked through their documentation and also confirmed on the phone with support that all you have to do to configure mail is to put the relay IP or FQDN in that box and hit OK.

    Well, I had done that and mail still wasn't sending. So I SSH into the server (which is SuSE) and look at /var/log/mail, and it looks like it's trying to relay the email through 127.0.0.1, which is rejecting it. Looking through the config files, I see there's /etc/sendmail.cf and /etc/mail/submit.cf. You can configure items in /etc/sysconfig/sendmail and run SuSEconfig --module sendmail to generate those two .cf files based on what's in /etc/sysconfig/sendmail. Playing around, I see that when you set the SMTP Server value in the vCenter GUI, all it does is change the "DS" line in /etc/mail/submit.cf to DS[myrelayserver.com].

    Looking on the internet, it would appear that the DS line is really the only thing you need to change in order to use a relay server. I got on the phone with VMware support and spent 2 hours trying to modify ANY setting that had anything to do with relays, and we couldn't get it to NOT use 127.0.0.1 as the relay. Just to note, any time we made any sort of configuration change, we restarted the sendmail service.

    Does anyone know what's going on? Have any ideas on how I can fix this?

    Read the article

  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clusters) for development, test and production at a client's site (running SuSE). I'm not too keen on the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I also would like to minimize the amount of configuration needed when upgrading/patching JBoss - but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach.

    This is what my installations look like (for the moment):

    Standard JBoss EAP install (minus server configs):

        /opt/jboss/jboss-eap-5.0/jboss-as
        /opt/jboss/jboss-eap-5.0/jboss-as/bin/
        /opt/jboss/jboss-eap-5.0/jboss-as/lib/
        /opt/jboss/jboss-eap-5.0/jboss-as/server/     [server configs removed to avoid starting them by mistake]
        /opt/jboss/jboss-eap-5.0/jboss-as/.../

    Application (some jboss folders have been omitted - you'll get the point anyway):

        /app/<project>/                               [$app.dir - application specific base folder]
        /app/<project>/jboss/                         [$jboss.home]
        /app/<project>/jboss/bin/        -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
        /app/<project>/jboss/lib/        -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
        /app/<project>/jboss/server/<cfg>/            [project specific config based on 'production']
        /app/<project>/jboss/server/<cfg>/log/ -> /log/<project>/<cfg>
        /app/<project>/jboss/server/<cfg>/...
        /app/<project>/jboss/.../        -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
        /app/<project>/bin/                           [application specific scripts for start/stop etc - wraps jboss supplied scripts]
        /app/<project>/deploy/                        [application deploy folder]
        /app/<project>/etc/                           [application specific config]

    Questions: How do you install JBoss (on Linux/Unix systems)? Where do you put JBoss and what modifications do you do? Where do you put your applications and application specific files? Do you share JBoss instances between applications or run one instance/cluster per application? How do you manage configuration changes (i.e. your modifications of the JBoss standard config)?

    Read the article

  • Failed Backup Job With Backup Exec 12 and AOFO

    - by Mort
    I am backing up a Windows 2003 Small Business Server with SP2. We are running Backup Exec 12 with SP4. Recently the backup job started failing on backing up the system state with the following error:

        V-79-57344-34110 - AOFO: Initialization failure on: "System State". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS).
        Snapshot provider error (0xE000FE7D): Access is denied. To back up or restore System State, administrator privileges are required.
        Check the Windows Event Viewer for details.

    Upon review of Symantec's website, the error indicates a credential problem. However, when I test the credentials they come back with no failures. I have found another forum thread here referencing a similar error and have tried what was indicated, with no successful results. I have created new jobs based on new selection lists, with no successful results. I suspect a new update, possibly from Microsoft, may be causing this, but I have no idea which one. I am looking for feedback. Thanks.

    Read the article

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After having made several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64-bit installer from Adobe's website on a Windows Server 2008 (64-bit) server with IIS 7.0. The installation itself goes smoothly and the services start and are running, but at the end of the installation it says "ColdFusion Installed, but with errors" and it generates a log file. The log file reads:

        Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusionMX 7

    and further down says:

        Status: WARNING
        Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
        Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
        Status: ERROR
        Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null

    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated, based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility even though I don't think it's needed. Has anyone else had similar issues with ColdFusion 10 and IIS 7? Currently I have uninstalled CF 10 and reverted back to

    Read the article

  • Managing scalability and availability with two servers running Apache Httpd, Apache Mina and MySQL

    - by celalo
    Hello, I am not a developer and I don't have much experience with scalable server architectures, but I am in need of a highly available and scalable system for one of my projects. There are going to be two servers I will use for the time being, both with 4-core CPUs and 8 GB RAM with RAID structures, running CentOS 5.4. I will also have a feature called "Failover IP" which makes it possible to redirect an IP address to another server within a short time.

    The applications which will be run on the servers: there is going to be a Java application based on the Apache MINA server for handling TCP requests from some hundreds of network devices, where the devices will send as much as one request per minute. Handling those requests includes parsing the requests and inserting a few rows into the database; parsing requests before inserting data into the DB takes negligible time. There is going to be a MySQL server, as I stated above. Also, there is going to be a PHP web application running on an Apache httpd server which uses the same DB as the Java application.

    What I wish is to make the most of those two servers. I was imagining having the servers identical, sharing the workload. MySQL could be a cluster, maybe? And if some application fails or the whole machine goes down, the other will continue serving the requests seamlessly - remember that a "Failover IP" feature will be available for me to take advantage of. It should also be kept in mind that the number of servers could increase over time to meet demand.

    What can you suggest? Which kinds of tools can I make use of? Which kind of monitoring software (paid/unpaid) should I use? Thanks in advance.

    Read the article

  • MultiView 2000 terminal emulator not printing correctly to Generic/Text Printer on Windows 7

    - by FantaFan
    Guys & gals, hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7.

    This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications.

    I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 compatibility mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost), but before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

        Remote OS: AIX 5
        Client OS: Windows 7 Pro (32-bit)
        Printer: Generic/Text on a FILE port
        TE Software: MultiView 2000 (32-bit)

    Thanks in advance.

    Read the article

  • Nginx: Rewrite rule for subfolder

    - by gryzzly
    Hello, I have a subdomain where I want to keep projects I am working on, in order to show these projects to clients. Here is the configuraion file from /etc/nginx/sites-available/projects: server { listen 80; server_name projects.example.com; access_log /var/log/nginx/projects.example.com.access.log; error_log /var/log/nginx/projects.example.com.error.log; location / { root /var/www/projects; index index.html index.htm index.php; } location /example2.com { root /var/www/projects/example2.com; auth_basic "Stealth mode"; auth_basic_user_file /var/www/projects/example2.com/htpasswd; } location /example3.com/ { index index.php; if (-f $request_filename) { break; } if (!-f $request_filename) { rewrite ^/example3\.com/(.*)$ /example3\.com/index.php?id=$1 last; break; } } location ~ \.php { root /var/www/mprojects; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } I want to be able to place different php engines (wordpress, getsimple etc.) in subfolders. These engines have different querry parameters (id, q, url etc.) so in order to make preety URLs work I have to make a rewrite. However, above doesn't work. This is the response I get: Warning: Unknown: Filename cannot be empty in Unknown on line 0 Fatal error: Unknown: Failed opening required '' (include_path='.:/usr/local/lib/php') in Unknown on line 0 If I take out "location /example3.com/" rule, then everything works but with no preety URLs. Please help. The configuration is based on this post: http://stackoverflow.com/questions/2119736/cakephp-in-a-subdirectory-using-nginx-rewrite-rules I am using Ubuntu 9.10 and nginx/0.7.62 with php-fpm.

    Read the article

  • Apache: VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not sup

    - by user45761
    Hi, when I add the line below to /etc/apache2/apache2.conf, I get the error below when I restart Apache:

        Include /usr/share/doc/apache2.2-common/examples/apache2/extra/httpd-vhosts.conf

        [Mon Jun 14 12:16:47 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Mon Jun 14 12:16:47 2010] [warn] NameVirtualHost *:80 has no VirtualHosts

    This is my httpd-vhosts.conf file:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        <VirtualHost *:80>
            ServerName tirengarfio.com
            DocumentRoot /var/www/rs3

            <Directory /var/www/rs3>
                AllowOverride All
                Options MultiViews Indexes SymLinksIfOwnerMatch
                Allow from All
            </Directory>

            Alias /sf /var/www/rs3/lib/vendor/symfony/data/web/sf
            <Directory "/var/www/rs3/lib/vendor/symfony/data/web/sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Any idea? Regards, Javi

    Read the article

  • How to resolve Windows Update Error 8024402F on Windows 7 Home Premium 64bit?

    - by Day
    I have been having the same problem with Windows Updates on 2 of my machines at home, both running Windows 7 Home Premium 64-bit. One of the 2 machines is a brand new install, the other has run Windows Update in the past, but is also not working now. When I manually check for updates using the Control Panel, I get error code 8024402F: I followed the link to "Get help with this error", which brings up several articles in Windows Help and Support, none of which are for this specific error code. From the help and general googling I've tried: Checking internet connectivity. Most of the help suggests that this error is caused by a general internet connectivity problem. But if you're reading this, my connection is definitely working fine. Disabling antivirus temporarily and trying to run Windows Update. This didn't help (I run AVG free) Running Control Panel - Troubleshooting - Security Systems - Fix Problems with Windows Update. This said it detected and resolved problems, but didn't help. Update using IE (as I used to in XP). Go to http://windowsupdate.microsoft.com/ redirects me to http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx for which IE displays a "connection problem" (i.e. site unreachable) I've had the same problem for 24 hours now, so surely the Windows Update servers haven't been down this whole time? A quick check on twitter shows no worldwide outcry about Windows Update being unavailable, so is it just me? I'm based in the UK, but I notice that the http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx URL is also unavailable using ''wget'' from my webserver in Chicago. day@ord1:~$ wget http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx --2011-03-17 00:01:27-- http://test.update.microsoft.com/windowsupdate/v6/vistadefault.aspx Resolving test.update.microsoft.com... failed: Name or service not known. wget: unable to resolve host address `test.update.microsoft.com' day@ord1:~$ host test.update.microsoft.com Host test.update.microsoft.com not found: 3(NXDOMAIN)

    Read the article
