Search Results

Search found 11051 results on 443 pages for 'bind variables'.

Page 244/443

  • Tmux causes Emacs glitch

    - by killy9999
    Recently I started using Tmux, but I noticed that it causes a strange Emacs glitch. When I open Elisp or Haskell source code, the comments aren't highlighted. Only the comment delimiter is (; in the case of Elisp, -- in the case of Haskell); the rest of the commented line is in the normal colour. When I run Emacs outside of Tmux everything works as expected - the whole commented line is highlighted in the colour denoting a comment. Any ideas why this is happening? SOLUTION: Based on Stefan's comment I added this to my .emacs file: (custom-set-variables (custom-set-faces '(font-lock-comment-face ((((class color) (min-colors 8) (background dark)) (:foreground "red")))))) Now the comments are displayed in red, just like the comment delimiters.

    Read the article

  • (PHP) User is being forced to RE-LOGIN after trying to do something on an admin page

    - by hatorade
    I have created an admin panel for a client in PHP, which requires a login. Here is the code at the top of the admin page requiring the user to be logged in: admin.php <?php session_start(); require("_lib/session_functions.php"); require("_lib/db.php"); db_connect(); //if the user has not logged in if(!isLoggedIn()) { header('Location: login_form.php'); die(); } ?> Obviously, the if statement is what catches them and forces them to log in. Here is the code on the resulting login page: login_form.php <form name="login" action="login.php" method="post"> Username: <input type="text" name="username" /> Password: <input type="password" name="password" /> <input type="submit" value="Login" /> </form> Which posts info to this controller page: login.php <?php session_start(); //must call session_start before using any $_SESSION variables include '_lib/session_functions.php'; $username = $_POST['username']; $password = $_POST['password']; include '_lib/db.php'; db_connect(); // Connect to the DB $username = mysql_real_escape_string($username); $query = "SELECT password, salt FROM users WHERE username = '$username';"; $result = mysql_query($query); if(mysql_num_rows($result) < 1) //no such user exists { header('Location: login_form.php?login=fail'); die(); } $userData = mysql_fetch_array($result, MYSQL_ASSOC); db_disconnect(); $hash = hash('sha256', $password . $userData['salt']); if($hash != $userData['password']) //incorrect password { header('Location: login_form.php?login=fail'); die(); } else { validateUser(); //sets the session data for this user } header('Location: admin.php'); ?> and the session functions page that provides login functions contains this: session_functions.php <?php function validateUser() { session_regenerate_id (); //this is a security measure $_SESSION['valid'] = 1; $_SESSION['userid'] = $username; } function isLoggedIn() { if($_SESSION['valid']) return true; return false; } function logout() { $_SESSION = array(); //destroy all of the session variables if (ini_get("session.use_cookies")) { $params = session_get_cookie_params(); setcookie(session_name(), '', time() - 42000, $params["path"], $params["domain"], $params["secure"], $params["httponly"] ); } session_destroy(); } ?> I grabbed the sessions_functions.php code of an online tutorial, so it could be suspicious. Any ideas why the user logs in to the admin panel, tries to do something, is forced to re-login, and THEN is allowed to do stuff like normal in the admin panel?
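
    For illustration, a hedged sketch of a more defensive session_functions.php (not necessarily the fix for the re-login symptom): in the pasted version validateUser() reads $username, which is out of scope inside the function, and isLoggedIn() dereferences $_SESSION['valid'] without isset(). Passing the username in explicitly (and calling validateUser($username) from login.php accordingly), plus calling session_write_close() before the redirect to admin.php, makes sure the session data is written before the browser's next request.

      <?php
      function validateUser($username)
      {
          session_regenerate_id(true);       // security measure; also drops the old session file
          $_SESSION['valid']  = 1;
          $_SESSION['userid'] = $username;
      }

      function isLoggedIn()
      {
          return isset($_SESSION['valid']) && $_SESSION['valid'] == 1;
      }
      ?>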

    Read the article

  • Bug in CF9: values for unique struct keys referenced and overwritten by other keys.

    - by Gin Doe
    We've run into a serious issue with CF9 wherein values for certain struct keys can be referenced by other keys, despite those other keys never being set. See the following examples: Edit: Looks like it isn't just something our servers ate. This is Adobe bug-track ticket 81884: http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=81884. <cfset a = { AO = "foo" } /> <cfset b = { AO = "foo", B0 = "bar" } /> <cfoutput> The following should throw an error. Instead both keys refer to the same value. <br />Struct a: <cfdump var="#a#" /> <br />a.AO: #a.AO# <br />a.B0: #a.B0# <hr /> The following should show a struct with 2 distinct keys and values. Instead it contains a single key, "AO", with a value of "bar". <br />Struct b: <cfdump var="#b#" /> This is obviously a complete show-stopper for us. I'd be curious to know if anyone has encountered this or can reproduce this in their environment. For us, it happens 100% of the time on Apache/CF9 running on Linux, both RH4 and RH5. We're using the default JRun install on Java 1.6.0_14. To see the extent of the problem, we ran a quick loop to find other naming sequences that are affected and found hundreds of matches for 2 letter key names. A similar loop found more conflicts in 3 letter names. <cfoutput>Testing a range of affected key combinations. This found hundreds of cases on our platform. Aborting after 50 here.</cfoutput> <cfscript> teststring = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; stringlen = len(teststring); matchesfound = 0; matches = ""; for (i1 = 1; i1 <= stringlen; i1++) { symbol1 = mid(teststring, i1, 1); for (i2 = 1; i2 <= stringlen; i2++) { teststruct = structnew(); symbol2 = mid(teststring, i2, 1); symbolwhole = symbol1 & symbol2; teststruct[ symbolwhole ] = "a string"; for (q1 = 1; q1 <= stringlen; q1++) { innersymbol1 = mid(teststring, q1, 1); for (q2 = 1; q2 <= stringlen; q2++) { innersymbol2 = mid(teststring, q2, 1); innersymbolwhole = innersymbol1 & innersymbol2; if ((i1 != q1 || i2 != q2) && structkeyexists(teststruct, innersymbolwhole)) { // another affected pair of keys! writeoutput ("<br />#symbolwhole# = #innersymbolwhole#"); if (matchesfound++ > 50) { // we've seen enough abort; } } } } } } </cfscript> And edit again: This doesn't just affect struct keys but names in the variables scope as well. At least the variables scope has the presence of mind to throw an error, "can't load a null": <cfset test_b0 = "foo" /> <cfset test_ao = "bar" /> <cfoutput> test_b0: #test_b0# <br />test_ao: #test_ao# </cfoutput>

    Read the article

  • MySQL port 3306 became filtered when configured with Keepalived on Ubuntu server 12.04 lts

    - by Ludwig
    I'm configuring two load balancers (lb01 & lb02) with keepalived for my two MySQL servers (db01 & db02) on the standard port 3306. There is a virtual IP address (192.168.205.10) to access them, which also acts as a failover, but somehow the web server in front can't reach MySQL through the VIP. Here is my keepalived config (only the MySQL part I added is shown here): LB01: virtual_server 192.168.205.10 3306 { delay_loop 6 lb_algo rr lb_kind DR protocol TCP real_server 192.168.205.4 3306 { weight 10 TCP_CHECK { connect_port 3306 connect_timeout 2 } } } LB02: virtual_server 192.168.205.10 3306 { delay_loop 6 lb_algo rr lb_kind DR protocol TCP real_server 192.168.205.6 3306 { weight 10 TCP_CHECK { connect_port 3306 connect_timeout 2 } } } I have already commented out the "bind-address = 127.0.0.1" line in both servers' my.cnf, and I removed all firewall rules (ufw or iptables) from my Ubuntu servers. Any help? Thanks.
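
    One thing worth checking, since both virtual_server blocks use lb_kind DR: with direct routing the packets still reach db01/db02 addressed to the VIP, so each real server normally needs the VIP configured on a local interface and ARP for it suppressed - otherwise connections to 192.168.205.10 go nowhere. A hedged sketch for the real servers (the lo label and /32 are assumptions):

      # on db01 and db02
      ip addr add 192.168.205.10/32 dev lo label lo:0   # hold the VIP locally
      sysctl -w net.ipv4.conf.all.arp_ignore=1          # don't answer ARP for it
      sysctl -w net.ipv4.conf.all.arp_announce=2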

    Read the article

  • Nginx Rate Limiting by Referrer?

    - by SteveEdson
    I've successfully set up rate limiting on IP addresses like so: limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; But I was wondering if it's possible to do the same on referrers? For example, if a site gets placed in an iframe on a third-party site, which generates too much traffic to handle. I can't find any nginx variables for the referrer anywhere. Is this possible? Or can the solution be achieved in a different way? Thanks.
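
    For what it's worth, nginx exposes request headers as $http_<name>, so the Referer header is available as $http_referer and can be used as the zone key. A hedged sketch (the zone name and rate are placeholders; note that requests with an empty key are not rate-limited, so visits without a Referer pass through untouched):

      limit_req_zone $http_referer zone=per_referrer:10m rate=1r/s;

      server {
          location / {
              limit_req zone=per_referrer burst=5;
          }
      }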

    Read the article

  • How to use ssl_verify_client=ON on one virtual server and ssl_verify_client=OFF on another?

    - by Alexander Artemenko
    I want to force SSL client verification for one of my virtual hosts, but I get a "No required SSL certificate was sent" error when trying to GET something from it. Here are my test configs: # defaults ssl_certificate /etc/certs/server.cer; ssl_certificate_key /etc/certs/privkey-server.pem; ssl_client_certificate /etc/certs/allcas.pem; server { listen 1443 ssl; server_name server1.example.com; root /tmp/root/server1; ssl_verify_client off; } server { listen 1443 ssl; server_name server2.example.com; root /tmp/root/server2; ssl_verify_client on; } The first server replies with an HTTP 200, but the second returns "400 Bad Request, No required SSL certificate was sent, nginx/1.0.4". Is it perhaps impossible to use ssl_verify_client on the same IP? Should I bind these servers to different IPs, and will that solve my problem?
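
    If the two names must stay on one listener, a pattern that is sometimes used instead (a hedged sketch, not necessarily the fix here) is to make verification "optional" at the TLS layer and enforce the result per server block via the $ssl_client_verify variable:

      server {
          listen 1443 ssl;
          server_name server2.example.com;
          root /tmp/root/server2;
          ssl_verify_client optional;

          # reject requests whose client certificate was missing or not verified
          if ($ssl_client_verify != SUCCESS) {
              return 403;
          }
      }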

    Read the article

  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords because some specific words need to be searchable in my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a REPAIR TABLE and then tried my MATCH query, with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and whilst I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
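
    For illustration, a hedged checklist (paths and the table name are placeholders): a "(built-in)" value usually means mysqld never picked the setting up at all, so it is worth confirming which config files the server actually reads and that the line sits under the [mysqld] group; the stopword change also only affects FULLTEXT indexes rebuilt after the restart.

      # which config files does this mysqld read, and in what order?
      mysqld --verbose --help | grep -A1 "Default options"

      # /etc/mysql/my.cnf
      [mysqld]
      ft_stopword_file = ''

      -- after restarting, rebuild the FULLTEXT index (MyISAM):
      REPAIR TABLE articles QUICK;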

    Read the article

  • Mount /tmp in /mnt on EC2

    - by Claudio Poli
    I was wondering what is the best way to mount the /tmp endpoint in the ephemeral storage /mnt on an EC2 instance and give the ubuntu user default write permissions. Some suggest editing /etc/rc.local this way: mkdir -p /mnt/tmp && mount --bind -o nobootwait /mnt/tmp /tmp However that doesn't work for me. I tried editing the fstab: /dev/xvdb /tmp auto defaults,nobootwait,comment=cloudconfig 0 2 and giving it a umask=0777, however it doesn't work because of cloudconfig. I'm using Ubuntu 12.04.
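
    A hedged variant of the rc.local approach that also takes care of permissions: the sticky, world-writable mode 1777 is what /tmp normally carries, and a bind mount exposes whatever mode /mnt/tmp has, so the ubuntu user can't write unless it is set explicitly.

      mkdir -p /mnt/tmp
      chmod 1777 /mnt/tmp
      mount --bind /mnt/tmp /tmp

    or, as an fstab line instead (assumes /mnt itself is already mounted by then):

      /mnt/tmp  /tmp  none  bind  0  0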

    Read the article

  • Should I reinstall my production server from 32 bit to 64 bit if it has 16GB of RAM?

    - by Alexandru Trandafir Catalin
    I have a production server with 16GB of RAM that came with a 32-bit CentOS installation. The website hosted on this server is increasing its traffic every day and I am having some MySQL trouble, so I checked the MySQL configuration with mysqltuner.pl, which gave me the following messages: [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** So my question is: can I survive with the 32-bit OS, or will I have to install the 64-bit one? Thanks.

    Read the article

  • How can I use `SetEnvIf` to clear an Apache2 environment variable?

    - by Jamie
    In my apache2 configuration I've got these lines: SetEnv log_everything # Create the environment variables based on access requests SetEnvIf Request_URI "^/orders/.*$" download_access !log_everything SetEnvIf Request_URI "^/download/.*$" download_access !log_everything SetEnvIf Request_URI "^/wg/.*$" wg_1x1_access !log_everything # Log the accesses using the generated environment variable as conditionals. CustomLog ${APACHE_LOG_DIR}/download.log combined env=download_access CustomLog ${APACHE_LOG_DIR}/wg.log combined env=wg_1x1_access RewriteEngine on RewriteRule "^/wg/.+$" "/wg/1x1.gif" ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined env=log_everything Which currently logs all the "download" and "orders" requests to "download.log" and "wg" requests to "wg.log", but everything is also going to access.log. How can I configure this so that "wg" and "download/orders" requests won't be duplicated in access.log?
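
    One hedged explanation worth testing: SetEnv belongs to mod_env, which runs late in the request cycle, after mod_setenvif has already processed the SetEnvIf lines - so the !log_everything clears happen before log_everything is even set, and access.log keeps everything. Setting the default inside mod_setenvif as well (its directives are applied in order) sidesteps that:

      # set the default the same way the exceptions are set
      SetEnvIf Request_URI "^" log_everything
      SetEnvIf Request_URI "^/orders/.*$" download_access !log_everything
      SetEnvIf Request_URI "^/download/.*$" download_access !log_everything
      SetEnvIf Request_URI "^/wg/.*$" wg_1x1_access !log_everything

      CustomLog ${APACHE_LOG_DIR}/download.log combined env=download_access
      CustomLog ${APACHE_LOG_DIR}/wg.log combined env=wg_1x1_access
      CustomLog ${APACHE_LOG_DIR}/access.log combined env=log_everything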

    Read the article

  • How to get a service to listen on port 80 on Windows Server 2003

    - by Miky D
    I've coded a custom windows service that listens on TCP port 80 but when I try to install it on a Windows Server 2003 machine it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service but no luck. When I run netstat -a -n -o | findstr 0.0:80 it gives me the process id 4 as the culprit, but when I look at the running processes that process id points to the "System" process. What can I do to get the System process to stop listening on port 80 and get my service to listen instead? P.S. I should point out that the service runs fine if I install it on my Windows XP or Windows 7 development boxes. Also, I should specify that this has nothing to do with it being a service. I've tried starting a regular application that attempts to bind to port 80 on the Windows Server 2003 with the same outcome - it fails because another application is already bound to that port.
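
    For what it's worth, PID 4 ("System") owning port 80 on Server 2003 is usually the kernel-mode HTTP listener, http.sys, which keeps the port even with the IIS Admin service stopped. A hedged sketch of freeing the port (this also stops services that depend on HTTP, such as HTTP SSL):

      net stop http /y
      sc config http start= disabled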

    Read the article

  • username and password for rsync in script

    - by sims
    I'm creating a cron job to keep two dirs in sync. I'm using rsync. I'm running an rsync daemon. I read the manual and it says: RSYNC_PASSWORD Setting RSYNC_PASSWORD to the required password allows you to run authenticated rsync connections to an rsync daemon without user intervention. Note that this does not supply a password to a shell transport such as ssh. USER or LOGNAME The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to 'nobody' I have something like: #!/bin/bash USER=name RSYNC_PASSWORD=pass DEST="server::module" /usr/bin/rsync -rltvvv . $DEST I also tried exporting (dangerous, I know) USER and RSYNC_PASSWORD. I also tried with LOGNAME. Nothing works. Am I doing this correctly?
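
    A hedged rework of the script: a plain shell assignment is not visible to the rsync child process, so the variables need to be exported - or sidestepped entirely with --password-file, with the username given inline in the destination:

      #!/bin/bash
      export RSYNC_PASSWORD=pass
      DEST="name@server::module"
      /usr/bin/rsync -rltv . "$DEST"

    or, keeping the password out of the environment (the secrets file must not be world-readable):

      #!/bin/bash
      /usr/bin/rsync -rltv --password-file=/root/rsync.secret . "name@server::module"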

    Read the article

  • test if master DNS has transferred copy to slave

    - by su55
    Hello, I set up my master and slave DNS servers on FreeBSD. I'm currently running BIND 9.x and so far everything is working successfully. Just one small problem: I can't get the master copy of my DNS zone to transfer to the slave server. I included transfer-allow {192.168.1.111;}; // this is the slave server's IP. I ran the rndc reload command to check, but I don't see the copy in /etc/named/master/. Any help would be appreciated, and if you would like the layout of my DNS config, I can provide that too.
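
    A couple of hedged notes: the directive BIND expects is allow-transfer rather than transfer-allow, and the slave only pulls a zone it has declared itself, into a file path its named process can write. A sketch with placeholder zone names, IPs and paths:

      // on the master, inside the zone block
      zone "example.com" {
          type master;
          file "master/example.com";
          allow-transfer { 192.168.1.111; };   // the slave
      };

      // on the slave
      zone "example.com" {
          type slave;
          masters { 192.168.1.110; };          // the master's IP (placeholder)
          file "slave/example.com";            // must be writable by named
      };

    On the slave, rndc retransfer example.com forces an immediate transfer instead of waiting for the refresh timer.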

    Read the article

  • Calculate new position of player

    - by user1439111
    Edit: I will summerize my question since it is very long (Thanks Len for pointing it out) What I'm trying to find out is to get a new position of a player after an X amount of time. The following variables are known: - Speed - Length between the 2 points - Source position (X, Y) - Destination position (X, Y) How can I calculate a position between the source and destion with these variables given? For example: source: 0, 0 destination: 10, 0 speed: 1 so after 1 second the players position would be 1, 0 The code below works but it's quite long so I'm looking for something shorter/more logical ====================================================================== I'm having a hard time figuring out how to calculate a new position of a player ingame. This code is server sided used to track a player(It's a emulator so I don't have access to the clients code). The collision detection of the server works fine I'm using bresenham's line algorithm and a raycast to determine at which point a collision happens. Once I deteremined the collision I calculate the length of the path the player is about to walk and also the total time. I would like to know the new position of a player each second. This is the code I'm currently using. It's in C++ but I am porting the server to C# and I haven't written the code in C# yet. // Difference between the source X - destination X //and source y - destionation Y float xDiff, yDiff; xDiff = xDes - xSrc; yDiff = yDes - ySrc; float walkingLength = 0.00F; float NewX = xDiff * xDiff; float NewY = yDiff * yDiff; walkingLength = NewX + NewY; walkingLength = sqrt(walkingLength); const float PI = 3.14159265F; float Angle = 0.00F; if(xDes >= xSrc && yDes >= ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; } else if(xDes < xSrc && yDes >= ySrc) { Angle = atanf((-xDiff / yDiff)); Angle = Angle * 180 / PI; Angle += 90.00F; } else if(xDes < xSrc && yDes < ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; Angle += 180.00F; } else if(xDes >= xSrc && yDes < ySrc) { Angle = atanf((xDiff / -yDiff)); Angle = Angle * 180 / PI; Angle += 270.00F; } float WalkingTime = (float)walkingLength / (float)speed; bool Done = false; float i = 0; while(i < walkingLength) { if(Done == true) { break; } if(WalkingTime >= 1000) { Sleep(1000); i += speed; WalkTime -= 1000; } else { Sleep(WalkTime); i += speed * WalkTime; WalkTime -= 1000; Done = true; } if(Angle >= 0 && Angle < 90) { float xNew = cosf(Angle * PI / 180) * i; float yNew = sinf(Angle * PI / 180) * i; float NewCharacterX = xSrc + xNew; float NewCharacterY = ySrc + yNew; } I have cut the last part of the loop since it's just 3 other else if statements with 3 other angle conditions and the only change is the sin and cos. The given speed parameter is the speed/second. The code above works but as you can see it's quite long so I'm looking for a new way to calculate this. btw, don't mind the while loop to calculate each new position I'm going to use a timer in C# Thank you very much
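
    For illustration, a hedged sketch of the shorter approach (C++, matching the original code): normalise the direction vector once and move along it, which removes the angle/quadrant handling entirely. With the example above (source 0,0, destination 10,0, speed 1) it yields (1, 0) after one second.

      #include <cmath>

      // position after `elapsed` seconds, clamped at the destination
      void positionAfter(float xSrc, float ySrc, float xDes, float yDes,
                         float speed, float elapsed, float &xOut, float &yOut)
      {
          float dx = xDes - xSrc, dy = yDes - ySrc;
          float length = std::sqrt(dx * dx + dy * dy);
          if (length == 0.0f) { xOut = xDes; yOut = yDes; return; }

          float travelled = speed * elapsed;
          if (travelled > length) travelled = length;   // don't overshoot

          xOut = xSrc + dx / length * travelled;
          yOut = ySrc + dy / length * travelled;
      }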

    Read the article

  • Status code in nginx try_files directive

    - by Hamish
    Is it possible to use the current status code as a parameter in try_files? For example, we try to provide a host specific 503 static response, or a server-wide fallback if it wasn't found: error_page 503 @error503; location @error503 { root /path_to_static_root/; try_files /$host/503.html /503.html =503; } There are a number of these directives, so it would be convenient to do something like: error_page 404 @error error_page 500 @error error_page 503 @error location @error { root /path_to_static_root/; try_files /$host/$status.html /$status.html =$status; } But the Variables documentation doesn't list anything that we could use to do this. Is it possible, or is there an alternative way to do this?

    Read the article

  • how can I git-revise configs in my /etc/ dir? (sudo has different keys..)

    - by Dean Rather
    I'd like to keep some of the folders in my /etc/ dir git-revised, because I'm quite new to server administration and am constantly messing around in my /etc/nginx/ and /etc/bind/ directories. I've heard of people git-revising their entire /etc/ directory, but that seems a bit like overkill, as at this point I'm only messing in those 2 subdirectories. The problem I'm having is that if I sudo my git operations, I don't have the right pubkeys to push to my remote repo (bitbucket). But if I don't sudo, I need to mess around with all the permissions (again, not very pro at this). Does anyone know best practices for managing their configs? Or how I should solve this problem? Thanks, Dean. PS. It's Ubuntu 12.04, Git, nginx, bind9, amazon aws, bitbucket...
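
    For what it's worth, this is roughly the niche etckeeper fills: it versions /etc, records ownership and permission metadata, and auto-commits around package operations, so the repo itself can stay root-owned. A hedged sketch (VCS choice lives in /etc/etckeeper/etckeeper.conf; the push line reuses your own ssh-agent socket so root can push with your key):

      sudo apt-get install etckeeper
      sudo etckeeper init
      sudo etckeeper commit "initial import"
      sudo env SSH_AUTH_SOCK=$SSH_AUTH_SOCK git --git-dir=/etc/.git push origin master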

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to setup a multimaster LDAP environment on Ubuntu 11.04 - instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions on the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html and it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at: /etc/ldap/slapd.d/ and a slapcat -b cn=config drops out a load of config information. When I try to connect using a browser and the admin bind credentials: ldapsearch -D '<adminDN>' -w <password> -b 'cn=config' I get: # extended LDIF # # LDAPv3 # base <> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # search result search: 2 result: 32 No such object I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
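
    A hedged note on where the config database usually hides on Ubuntu: cn=config is not readable with the data database's admin DN by default; it is normally reached as root over the local ldapi socket with SASL EXTERNAL authentication, for example:

      sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config dn

    If that lists the olcDatabase entries, the tree is there, and the replication changes from the OpenLDAP guide can be applied with ldapmodify over the same connection.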

    Read the article

  • How can I fix this configure error?

    - by balor123
    I'm trying to build mosh from source on a SUSE10 machine and am getting the following error: checking for protobuf... no configure: error: Package requirements (protobuf) were not met: No package 'protobuf' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables protobuf_CFLAGS and protobuf_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. I downloaded the source to protobuf and installed it in a custom path as well. I'm not using a package manager for any of this and cannot for various reasons outside the scope of the question. I added that custom path to my PATH and rehashed. Typically, this is enough for configure but in this case its not doing the trick. I added the prefix for protobuf to PKG_CONFIG_PATH but am still hitting this error. What should I do next to get past this error?
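
    For what it's worth, PATH only helps the shell find executables; pkg-config looks for protobuf.pc in the directories listed in PKG_CONFIG_PATH, and a source install of protobuf drops that file under <prefix>/lib/pkgconfig. A hedged sketch (the prefix is a placeholder):

      export PKG_CONFIG_PATH=/opt/protobuf/lib/pkgconfig:$PKG_CONFIG_PATH
      pkg-config --exists protobuf && echo "protobuf found"
      ./configure

    Failing that, the configure script's own suggestion works too: set protobuf_CFLAGS and protobuf_LIBS by hand before running ./configure.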

    Read the article

  • Can't set up IIS Web Server on Server 2008 x64 correctly (what have I missed?)

    - by balexandre
    Using a VM I installed Windows Server 2008 x64 and, as the screenshots (not reproduced here) showed, added the IIS role and assigned all of its role features. But if I have an ASP.NET (aspx) page that does (C#) Session["test-session"] = "A"; and then read it in another page, I always get nothing! NOTE: I do have an entire ASP.NET web application; the example above is just to be succinct and explicit about the problem I'm facing. Does anyone know what I have to do on the server so that I can use Session variables? All help is greatly appreciated, thank you
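
    One hedged thing to rule out: session state can be switched off or misconfigured in web.config independently of the role features, so it may be worth confirming the application explicitly enables it (InProc is the default mode):

      <system.web>
        <sessionState mode="InProc" cookieless="false" timeout="20" />
      </system.web>

    If the two pages are reached under different host names, or the response never sets the ASP.NET_SessionId cookie, each request gets a fresh session, which would also look like an empty Session.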

    Read the article

  • Start tomcat webapp with root privileges

    - by Hagay Myr
    I built a webapp that uses libpcap (via jpcap). In order to be able to get the network interface list or to bind to a network interface, the application (in this case a webapp running from the Tomcat server) must run with root privileges. During development I simply ran Eclipse with root privileges (sudo eclipse) and my webapp worked just fine with Eclipse's local Tomcat server. However, when I try to deploy my webapp to the "real" Tomcat server, it isn't working. I also tried to start the tomcat6 service with sudo and changed the TOMCAT6_USER definition (defined in /etc/init.d/tomcat6) from "tomcat6" to "root", but it made no difference. What should I do to make it work?
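
    Rather than running the whole container as root, one hedged alternative (Linux-only; the java path is a placeholder and the setcap tool comes from libcap) is to grant the JVM binary the capabilities libpcap actually needs, so the tomcat6 user can open raw sockets:

      sudo setcap cap_net_raw,cap_net_admin=eip /usr/lib/jvm/java-6-openjdk/jre/bin/java
      getcap /usr/lib/jvm/java-6-openjdk/jre/bin/java   # verify it took

    Note this applies to every process started from that java binary, so some prefer a dedicated JRE copy just for Tomcat.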

    Read the article

  • Why is my interface not showing when I run the project?

    - by Nubkadiya
    i have configured the Sphinx and i have used Main thread to do the recognition part. so that i can avoid the buttons. so currently my design is when the application runs it will check any voice recognition and prompt in the labels. but when i run the project it dont display the interface of the application. only the frame shows. here is the code. if you guys can provide me with any solution for this. it will be great. /* * To change this template, choose Tools | Templates * and open the template in the editor. */ /* * FinalMainWindow.java * * Created on May 17, 2010, 11:22:29 AM */ package FYP; import edu.cmu.sphinx.frontend.util.Microphone; import edu.cmu.sphinx.recognizer.Recognizer; import edu.cmu.sphinx.result.Result; import edu.cmu.sphinx.util.props.ConfigurationManager; //import javax.swing.; //import java.io.; public class FinalMainWindow extends javax.swing.JFrame{ Recognizer recognizer; private void allocateRecognizer() { ConfigurationManager cm; cm = new ConfigurationManager("helloworld.config.xml"); this.recognizer = (Recognizer) cm.lookup("recognizer"); this.recognizer.allocate(); Microphone microphone = (Microphone) cm.lookup("microphone");// TODO add // your if (!microphone.startRecording()) { // System.out.println("Cannot start microphone."); //this.jlblDest.setText("Cannot Start Microphone"); // this.jprogress.setText("Cannot Start Microphone"); System.out.println("Cannot Start Microphone"); this.recognizer.deallocate(); System.exit(1); } } boolean allocated; // property file eka....... //code.google.com private void voiceMajorInput() { if (!allocated) { this.allocateRecognizer(); allocated = true; } Result result = recognizer.recognize(); if (result != null) { String resultText = result.getBestFinalResultNoFiller(); System.out.println("Recognized Result is " +resultText); this.jhidden.setText(resultText); } } /** Creates new form FinalMainWindow */ public FinalMainWindow() { initComponents(); } /** This method is called from within the constructor to * initialize the form. * WARNING: Do NOT modify this code. The content of this method is * always regenerated by the Form Editor. */ @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code"> private void initComponents() { jhidden = new javax.swing.JLabel(); setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE); jhidden.setText("jLabel1"); javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(51, 51, 51) .addComponent(jhidden) .addContainerGap(397, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(45, 45, 45) .addComponent(jhidden) .addContainerGap(293, Short.MAX_VALUE)) ); pack(); }// </editor-fold> /** * @param args the command line arguments */ public static void main(String args[]) { java.awt.EventQueue.invokeLater(new Runnable() { public void run() { // new FinalMainWindow().setVisible(true); FinalMainWindow mw = new FinalMainWindow(); mw.setVisible(true); mw.voiceMajorInput(); new FinalMainWindow().setVisible(true); } }); } // Variables declaration - do not modify private javax.swing.JLabel jhidden; // End of variables declaration }
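
    For illustration, a hedged reading of the posted main(): voiceMajorInput() calls recognizer.recognize(), which blocks, and it is invoked on the Swing event dispatch thread via invokeLater - while the EDT is blocked the frame cannot lay out or paint its contents, so only the empty frame appears (the second new FinalMainWindow().setVisible(true) also opens a stray extra frame). A sketch that keeps the blocking loop off the EDT:

      public static void main(String args[]) {
          final FinalMainWindow mw = new FinalMainWindow();
          java.awt.EventQueue.invokeLater(new Runnable() {
              public void run() {
                  mw.setVisible(true);      // UI work stays on the EDT
              }
          });
          new Thread(new Runnable() {
              public void run() {
                  while (true) {
                      mw.voiceMajorInput(); // blocking recognize() off the EDT
                  }
              }
          }).start();
      }

    The jhidden.setText() call inside voiceMajorInput() would then ideally be pushed back through SwingUtilities.invokeLater, since it now runs on the background thread.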

    Read the article

  • Ubuntu and MySQL server: something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings: http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088 Now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server within the VM, but I cannot ping it from within the VM using mysqladmin --protocol=tcp --host=self_ip ping. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access it with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped why the server isn't allowing my connection. Apparently the config file works on another person's side: http://www.pastie.org/742545 I am using Ubuntu 6.06 LTS just because of 'support' reasons. So hopefully this will be 'easy'?
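
    A few hedged checks on the server itself (MySQL 5.0-era defaults): commenting out bind-address is not enough if skip-networking appears anywhere in the config, the change only applies after a restart, and the account also needs a host entry that matches the client (e.g. 'user'@'%'). netstat shows what mysqld is actually bound to:

      sudo netstat -ltnp | grep 3306                              # 0.0.0.0:3306 vs 127.0.0.1:3306
      grep -nE 'bind-address|skip-networking' /etc/mysql/my.cnf
      sudo /etc/init.d/mysql restart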

    Read the article

  • Socket in C: recv overwrites a char[]

    - by Possa
    Hi all, I'm trying to make a little client-server script like many others that I've done in the past. But in this one I have a problem. It is better if I post the code and the output it give me. Code: #include <mysql.h> //not important now #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <string.h> //constant definition #define SERVER_PORT 2121 #define LINESIZE 21 //global var definition char victim_ip[LINESIZE], file_write[LINESIZE], hacker_ip[LINESIZE]; //function void leggi (int); //not use now for debugging purpose //void scriviDB (); //not important now main () { int sock, client_len, fd; struct sockaddr_in server, client; // transport end point if((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("system call socket fail"); exit(1); } server.sin_family = AF_INET; server.sin_addr.s_addr = inet_addr("10.10.10.1"); server.sin_port = htons(SERVER_PORT); // binding address at transport end point if (bind(sock, (struct sockaddr *)&server, sizeof server) == -1) { perror("system call bind fail"); exit(1); } //fprintf(stderr, "Server open: listening.\n"); listen(sock, 5); /* managae client connection */ while (1) { client_len = sizeof(client); if ((fd = accept(sock, (struct sockaddr *)&client, &client_len)) < 0) { perror("accepting connection"); exit(1); } strcpy(hacker_ip, inet_ntoa(client.sin_addr)); printf("1 %s\n", hacker_ip); //debugging purpose //leggi(fd); ////////////////////////// //receive client recv(fd, victim_ip, LINESIZE, 0); victim_ip[sizeof(victim_ip)] = '\0'; printf("2 %s\n", hacker_ip); //debugging purpose recv(fd, file_write, LINESIZE, 0); file_write[sizeof(file_write)] = '\0'; printf("3 %s\n", hacker_ip); //debugging purpose printf("%s@%s for %s\n", file_write, victim_ip, hacker_ip); //send to client send(fd, hacker_ip, 40, 0); //now is hacker_ip for debug ///////////////////////// close(fd); }//end while exit(0); } //end main Client send string: ./send -i 10.10.10.4 -f filename.ext so the script send -i (IP) and -f (FILE) at the server. Here's my output server side: 1 10.10.10.6 2 10.10.10.6 3 [email protected] for As you can see the printf(3) and the printf(ip,file,ip) fail. I don't know how and where but someone overwrite my hacker_ip string. Thanks for your help! :)
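
    A hedged observation from the posted source: victim_ip and file_write are char[LINESIZE] (21 bytes), so victim_ip[sizeof(victim_ip)] and file_write[sizeof(file_write)] each write index 21, one byte past the array, straight into whichever global the linker placed next - which matches the symptom of hacker_ip getting clobbered. Terminating at the byte count recv() actually returned avoids both that and printing stale data:

      /* inside the accept loop, replacing the two recv blocks */
      ssize_t n = recv(fd, victim_ip, sizeof(victim_ip) - 1, 0);
      if (n <= 0) { perror("recv"); close(fd); continue; }
      victim_ip[n] = '\0';

      n = recv(fd, file_write, sizeof(file_write) - 1, 0);
      if (n <= 0) { perror("recv"); close(fd); continue; }
      file_write[n] = '\0';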

    Read the article

  • What is "read operations inside transactions can't allow failover" ?

    - by Kenyth
    From time to time I got the following exception message on GAE for my GAE/J app. I searched with Google, no relevant results were found. Does anyone know about this? Thanks in advance for any response! The exception message is as below: Nested in org.springframework.orm.jpa.JpaSystemException: Illegal argument; nested exception is javax.persistence.PersistenceException: Illegal argument: java.lang.IllegalArgumentException: read operations inside transactions can't allow failover at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java: 34) at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java: 67) at com.google.appengine.api.datastore.DatastoreServiceImpl $1.run(DatastoreServiceImpl.java:128) at com.google.appengine.api.datastore.TransactionRunner.runInTransaction(TransactionRunner.java: 30) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 111) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 84) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 77) at org.datanucleus.store.appengine.RuntimeExceptionWrappingDatastoreService.get(RuntimeExceptionWrappingDatastoreService.java: 53) at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java: 94) at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java: 106) at org.datanucleus.store.appengine.DatastorePersistenceHandler.fetchObject(DatastorePersistenceHandler.java: 464) at org.datanucleus.state.JDOStateManagerImpl.loadUnloadedFieldsInFetchPlan(JDOStateManagerImpl.java: 1627) at org.datanucleus.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java: 1603) at org.datanucleus.ObjectManagerImpl.performDetachAllOnCommitPreparation(ObjectManagerImpl.java: 3192) at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java: 2931) at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java: 369) at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256) at org.datanucleus.jpa.EntityTransactionImpl.commit(EntityTransactionImpl.java: 104) at org.datanucleus.store.appengine.jpa.DatastoreEntityTransactionImpl.commit(DatastoreEntityTransactionImpl.java: 55) at name.kenyth.playtweets.service.Tx.run(Tx.java:39) at name.kenyth.playtweets.web.controller.TwitterApiController.persistStatus(TwitterApiController.java: 309) at name.kenyth.playtweets.web.controller.TwitterApiController.processStatusesForWebCall(TwitterApiController.java: 271) at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates_aroundBody0(TwitterApiController.java: 247) at name.kenyth.playtweets.web.controller.TwitterApiController $AjcClosure1.run(TwitterApiController.java:1) at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc $around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement $2$439820b7proceed(AuthenticationEnforcement.aj:1) at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc $around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement $2$439820b7(AuthenticationEnforcement.aj:168) at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates(TwitterApiController.java: 129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
Source) at java.lang.reflect.Method.invoke(Method.java:43) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java: 710) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java: 167) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java: 414) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java: 402) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java: 771) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java: 716) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java: 647) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java: 552) at javax.servlet.http.HttpServlet.service(HttpServlet.java:693) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java: 511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java: 390) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java: 216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java: 182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java: 765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java: 418) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126) at org.tuckey.web.filters.urlrewrite.NormalRewrittenUrl.doRewrite(NormalRewrittenUrl.java: 195) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java: 159) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java: 141) at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java: 90) at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java: 417) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java: 71) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java: 76) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java: 88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java: 76) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java: 97) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java: 35) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java: 43) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java: 388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java: 216) at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java: 182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java: 765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java: 418) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java: 238) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java: 152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java: 542) at org.mortbay.jetty.HttpConnection $RequestHandler.headerComplete(HttpConnection.java:923) at com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java: 76) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java: 135) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java: 250) at com.google.apphosting.base.RuntimePb$EvaluationRuntime $6.handleBlockingRequest(RuntimePb.java:5838) at com.google.apphosting.base.RuntimePb$EvaluationRuntime $6.handleBlockingRequest(RuntimePb.java:5836) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java: 24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication

    Read the article

  • Configure all hosts, then create a list of the config for all hosts?

    - by AME
    I deployed a huge number of hosts with Ansible - which worked very nicely. Each host got its individual settings and configuration. Now I'd like to generate a config file for another system that uses these hosts. For it, I need, for every host, a part of the generated configuration (the part that configures the database). Here is an example of the situation with two hosts having different configurations and the other system that uses a part of the Ansible-generated configuration: host1 ansible configured dbA host2 ansible configured dbQ The other system: host1 = dbA host2 = dbQ The values are computed differently (dbQ instead of dbB for host2, for example) if the host belongs to a different cluster and so on, making it impractical to just read the host configuration out of the host_vars. I believe I would need to iterate over the hosts and let Ansible figure out the computed values for the variables like it would when deploying, but I do not know how to put that result into one template. Please advise :)
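
    A hedged sketch of the template side (db_name and the group name are placeholders for however the per-host value is exposed): a single play against localhost can render one file by walking the inventory and reading each host's value out of hostvars.

      # other_system.conf.j2
      {% for host in groups['all'] %}
      {{ host }} = {{ hostvars[host]['db_name'] }}
      {% endfor %}

    The catch is that hostvars only holds values Ansible already knows about (inventory, host_vars, cached facts, or set_fact from an earlier play), so if the database value is computed during the deploy it has to be registered or set_fact'ed there first; the aggregation play can then run afterwards in the same playbook.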

    Read the article
