Search Results

Search found 244 results on 10 pages for 'luca farber'.

Page 4/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • Strange behavior of DDD debugger in Ubuntu

    - by Alex Farber
    I installed the DDD debugger in Ubuntu and am trying to work with it. It looks like the DDD UI doesn't work properly with the Ubuntu desktop environment. Edit boxes are almost unusable: sometimes they accept keyboard input, but most of the time input is ignored. The internal resizing panes don't work either. Is there some way to get the DDD UI working properly? The behavior is the same on Ubuntu 9.10 32-bit and 10.04 64-bit, so this is not an Ubuntu version issue.

    Read the article

  • makefile from Linux doesn't work in OpenSolaris

    - by Alex Farber
    On OpenSolaris, when I run a makefile generated by Eclipse CDT on Linux, I get an error on the first -include line. The same error occurred on FreeBSD and was solved by running gmake instead of make. On a freshly installed OpenSolaris, gmake doesn't work (command not found). What package should I install, and how exactly, to build a Linux-generated C++ project on OpenSolaris?
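
    A sketch of how GNU make is usually added on OpenSolaris through the IPS package system; the package names below vary by release and are assumptions to confirm with pkg search first.

      # Find and install whichever package delivers usr/bin/gmake (names are assumptions):
      pfexec pkg search gmake
      pfexec pkg install SUNWgmake                    # legacy name on older releases
      # pfexec pkg install developer/build/gnu-make   # IPS name on later builds

      # Then build the Eclipse CDT-generated project with GNU make:
      gmake all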

    Read the article

  • Building Boost with LSB C++ Compiler

    - by Alex Farber
    I want to build my program with the LSB C++ compiler from the Linux Standard Base (http://www.linuxfoundation.org/collaborate/workgroups/lsb). The program depends on the Boost library, which was built with gcc 4.4, and compilation fails. Is it possible to build the Boost library with the LSB C++ compiler? Alternatively, is it possible to build Boost with some older gcc version, and if so, which version is recommended? My final goal is to get my executable and the third-party Boost libraries running on most Linux distributions. More generally, what can be done to get better binary compatibility across Linux distributions when developing a closed-source C++ application that depends on Boost?
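
    A sketch of one way to point Boost.Build at an alternative compiler: register the LSB wrapper as a named gcc variant and build with that toolset. The /opt/lsb/bin/lsbc++ path and the gcc-lsb toolset name are assumptions based on a default LSB SDK install, not a verified recipe.

      # From the unpacked Boost source directory:
      ./bootstrap.sh --prefix=/opt/boost-lsb

      # Register the LSB C++ wrapper as a gcc variant for Boost.Build:
      echo "using gcc : lsb : /opt/lsb/bin/lsbc++ ;" > user-config.jam

      # Static libraries avoid shipping extra Boost .so files with the application:
      ./bjam --user-config=user-config.jam toolset=gcc-lsb link=static install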

    Read the article

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script that runs as a daily cron job and deletes old records from one main table and a few further tables referencing its "id" column: create or replace function quincytrack_clean() returns integer as $BODY$ begin create temp table old_ids (id varchar(20)) on commit drop; insert into old_ids select id from quincytrack where age(QDATETIME) > interval '30 days'; delete from hide_id where id in (select id from old_ids); delete from related_mks where id in (select id from old_ids); delete from related_cl where id in (select id from old_ids); delete from related_comment where id in (select id from old_ids); delete from quincytrack where id in (select id from old_ids); return select count(*) from old_ids; end; $BODY$ language plpgsql; And here is how I call it from the PHP script: $sth = $pg->prepare('select quincytrack_clean()'); $sth->execute(); if ($row = $sth->fetch(PDO::FETCH_ASSOC)) printf("removed %u old rows\n", $row['count']); Why do I get the following error? SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9 QUERY: SELECT select count(*) from old_ids CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23 Thank you! Alex
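
    A sketch of the likely fix, reusing the function from the question: in PL/pgSQL, RETURN takes an expression rather than a bare SELECT, so the count has to be selected into a variable (or wrapped in parentheses as a scalar subquery).

      create or replace function quincytrack_clean() returns integer as $BODY$
      declare
          n integer;
      begin
          create temp table old_ids (id varchar(20)) on commit drop;
          insert into old_ids
              select id from quincytrack where age(QDATETIME) > interval '30 days';

          delete from hide_id         where id in (select id from old_ids);
          delete from related_mks     where id in (select id from old_ids);
          delete from related_cl      where id in (select id from old_ids);
          delete from related_comment where id in (select id from old_ids);
          delete from quincytrack     where id in (select id from old_ids);

          -- count the collected ids and return that number
          select count(*) into n from old_ids;
          return n;   -- or equivalently: return (select count(*) from old_ids);
      end;
      $BODY$ language plpgsql;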

    Read the article

  • PostgreSQL: keep a certain number of records in a table

    - by Alexander Farber
    Hello, I have an SQL table holding the last hands received by a player in a card game. A hand is represented by an integer (32 bits == 32 cards): create table pref_hand ( id varchar(32) references pref_users, hand integer not NULL check (hand > 0), stamp timestamp default current_timestamp ); As the players are playing constantly and that data isn't important (just a gimmick displayed on player profile pages), and I don't want my database to grow too quickly, I'd like to keep only up to 10 records per player id. So I'm trying to declare this PL/pgSQL procedure: create or replace function pref_update_game(_id varchar, _hand integer) returns void as $BODY$ begin delete from pref_hand offset 10 where id=_id order by stamp; insert into pref_hand (id, hand) values (_id, _hand); end; $BODY$ language plpgsql; but unfortunately this fails with: ERROR: syntax error at or near "offset" because DELETE doesn't support OFFSET. Does anybody have a better idea here? Thank you! Alex
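
    A sketch of one workaround, reusing the names from the question: DELETE accepts neither ORDER BY nor OFFSET, but the rows to drop can be picked in a subquery that keeps the 10 most recent stamps; if two hands could share the same stamp, deleting by ctid instead of stamp would be the safer variant.

      create or replace function pref_update_game(_id varchar, _hand integer)
      returns void as $BODY$
      begin
          insert into pref_hand (id, hand) values (_id, _hand);

          -- drop everything for this player except the 10 newest rows
          delete from pref_hand
          where id = _id
            and stamp not in (
                select stamp from pref_hand
                where id = _id
                order by stamp desc
                limit 10
            );
      end;
      $BODY$ language plpgsql;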

    Read the article

  • How to install a program that depends on the libstdc++ library

    - by Alex Farber
    My program is written in C++, using GCC on Ubuntu 9.10 64-bit. It depends on /usr/lib64/libstdc++.so.6, which actually points to /usr/lib64/libstdc++.so.6.0.13. Now I copy this program to a clean Ubuntu 7.04 system and try to run it. It doesn't run, as expected. Then I add the following files to the program directory: libstdc++.so.6.0.13 and libstdc++.so.6 (a link to libstdc++.so.6.0.13) and execute the command: LD_LIBRARY_PATH=. ./myprogram Now everything is OK. The question: how can I write an installation script for such a program? The myprogram file itself should be placed in /usr/local/bin. What can I do about the dependencies? For example, on the destination computer the /usr/lib64/libstdc++.so.6 link points to /usr/lib64/libstdc++.so.6.0.8. What can I do about this? Note: the program is closed-source; I cannot provide source code and a makefile.
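
    A sketch of one common packaging pattern for this situation: ship the newer libstdc++ in a private directory and install a small wrapper in /usr/local/bin that puts that directory on LD_LIBRARY_PATH before launching the real binary. Directory and file names are illustrative, not a tested installer.

      #!/bin/sh
      # install.sh -- illustrative layout for bundling libstdc++ with myprogram
      set -e
      PREFIX=/usr/local
      LIBDIR=$PREFIX/lib/myprogram

      install -d "$PREFIX/bin" "$LIBDIR"
      install -m 755 myprogram "$LIBDIR/myprogram.bin"        # the real binary, renamed
      install -m 644 libstdc++.so.6.0.13 "$LIBDIR/"
      ln -sf libstdc++.so.6.0.13 "$LIBDIR/libstdc++.so.6"

      # Wrapper: the real binary sees the bundled libstdc++ first.
      cat > "$PREFIX/bin/myprogram" <<EOF
      #!/bin/sh
      LD_LIBRARY_PATH=$LIBDIR\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH} exec $LIBDIR/myprogram.bin "\$@"
      EOF
      chmod 755 "$PREFIX/bin/myprogram"

    If relinking the program is an option, setting an $ORIGIN-relative rpath at link time can replace the wrapper entirely.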

    Read the article

  • Linux directories

    - by Alex Farber
    I am writing an installation script for my program, which is supposed to run on Linux/Unix systems. What are the default directories for the following files: executable files (programs), which should be runnable by typing their name on the command line; shared libraries; third-party shared libraries (the program is not open source, so I need to redistribute third-party libraries); read-only program configuration files for all users; and configuration data with read/write access for all users?
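
    A sketch of the conventional answer for locally installed, non-packaged software, expressed as install commands following the Filesystem Hierarchy Standard; the myprogram names are placeholders.

      install -m 755 myprogram        /usr/local/bin/       # executables, on every user's PATH
      install -m 644 libmylib.so.1    /usr/local/lib/       # shared libraries (run ldconfig afterwards)
      install -d /usr/local/lib/myprogram                   # bundled third-party libraries
      install -d /etc/myprogram                             # read-only configuration (or /usr/local/etc)
      install -d -m 1777 /var/lib/myprogram                 # data writable by all users (sticky bit, like /tmp)
      ldconfig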

    Read the article

  • Get home directory in Linux, C++

    - by Alex Farber
    I need a way to get the user's home directory in a C++ program running on Linux. If the same code also works on other Unix systems, that would be nice. I don't want to use the HOME environment variable. AFAIK, root's home directory is /root. Is it OK to create some files/folders in this directory in the case where my program is run by the root user?
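
    A sketch of the usual approach: query the password database for the current user with getpwuid(getuid()), which works even when $HOME is unset or overridden and is portable across Unix systems.

      #include <pwd.h>
      #include <sys/types.h>
      #include <unistd.h>
      #include <iostream>
      #include <string>

      // Home directory of the user the process runs as, taken from the passwd database.
      std::string home_directory() {
          if (const struct passwd* pw = getpwuid(getuid())) {
              return pw->pw_dir;        // e.g. "/root" when running as root
          }
          return std::string();         // lookup failed
      }

      int main() {
          std::cout << home_directory() << '\n';
      }

    getpwuid_r is the reentrant variant if the lookup has to happen from multiple threads.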

    Read the article

  • How to enable core dump in my Linux C++ program

    - by Alex Farber
    My program is written in C++ and compiled with gcc, using the -g3 -O0 -ggdb flags. When it crashes, I want to open its core dump. Does it create a core dump file by itself, or do I need to do something to enable core dump creation, either in the program itself or on the computer where it is executed? Where is this file created, and what is its name?
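
    A sketch of the usual checklist: core dumps are disabled by default because the core size limit is 0, so the limit has to be raised in the shell that starts the program (or via setrlimit in the program itself), and the file's name and location are governed by the kernel's core_pattern setting.

      # Allow core dumps in the current shell (the default soft limit is usually 0):
      ulimit -c unlimited

      # Where and under what name the kernel writes the dump; a plain name such as
      # "core" means the crashing process's current working directory (on some
      # Ubuntu releases a leading | pipes the dump to a crash handler like apport):
      cat /proc/sys/kernel/core_pattern

      # Run the program, let it crash, then open the dump with gdb:
      ./myprogram
      gdb ./myprogram core        # the file may be named core or core.<pid>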

    Read the article

  • Dovecot: no auth attempts in 0 secs (IMAP protocol)

    - by Luca D'Amico
    I'm having a lot of problems configuring dovecot on my VPS. I'm already able to receive email using port 110 and to send email using port 25, but I can't connect using ports 993 and 995. I'm using self-signed SSL certificates. When I try to connect to 993 this error is logged: Jun 8 19:06:39 MY_HOSTNAME dovecot: imap-login: Disconnected (no auth attempts in 2 secs): user=<>, rip=MY_IP, lip=MY_VPS_IP, TLS, session=<MY_SESSION> When I try to connect to 995 here is the error log: Jun 8 19:08:17 MY_HOSTNAME dovecot: pop3-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=MY_IP, lip=MY_VPS_IP, TLS: SSL_read() failed: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown: SSL alert number 46, session=<MY_SESSION> EDIT: I was able to fix this part by refreshing my mail client's SSL certificate. Can anybody help me, please? I'm stuck :/ Many thanks
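
    A sketch of how each TLS listener can be tested independently of any mail client: openssl's s_client completes the handshake and prints the certificate the server actually presents, which is usually enough to tell a Dovecot problem from a client-side trust problem.

      # IMAPS (993): complete the TLS handshake and show the presented certificate chain
      openssl s_client -connect MY_VPS_IP:993 -showcerts </dev/null

      # POP3S (995): the same check against the pop3-login service
      openssl s_client -connect MY_VPS_IP:995 -showcerts </dev/null

      # Watch Dovecot's side of the conversation while testing (log path varies by distribution):
      tail -f /var/log/mail.log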

    Read the article

  • BIND authoritative name server: SERVFAIL?

    - by Luca Tettamanti
    I have a BIND 9.6 instance that acts as a caching NS for the whole building and is also authoritative for an internal zone ("example" below): zone "example" { type master; file "example"; update-policy { grant dhcp-update subdomain example. A TXT; }; }; Due to a rogue switch we lost connectivity with the rest of the world, and the NS started answering SERVFAIL; what surprised me was that the server was also unable to respond to queries for the example domain. What is the reason for this behavior? Shouldn't the NS be able to answer, since it has authoritative data? Edit: The rest of the configuration is the standard one shipped with Debian: hints for the root servers and the zones for localhost and broadcast.
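
    A sketch of how the authoritative path can be separated from recursion when testing this: query the local server with recursion disabled, which a master for the zone should answer from its zone file regardless of upstream connectivity, and compare with a normal recursive query that needs the forwarders or root servers (the somehost name is a placeholder).

      # Authoritative data only, no recursion requested:
      dig @127.0.0.1 example. SOA +norecurse
      dig @127.0.0.1 somehost.example. A +norecurse

      # Normal recursive query for comparison (this one depends on the forwarders):
      dig @127.0.0.1 example. SOA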

    Read the article

  • SSL on local sub-domain and sub-sub-domain

    - by Eduard Luca
    I have both local.domain.com and lmarket.local.domain.com pointing to my localhost from etc/hosts. The problem is that I am using XAMPP on Windows 7, and have 2 SSL VirtualHosts in my apache config, but no matter which one I access, I am taken to local.domain.com. On non-HTTPS requests all works fine, and the vhosts are basically the same. Here is the relevant part of my vhosts: <VirtualHost local.domain.com:443> DocumentRoot "C:/xampp/htdocs/local" ServerName local.domain.com ServerAdmin webmaster@localhost ErrorLog "logs/error.log" <IfModule log_config_module> CustomLog "logs/access.log" combined </IfModule> SSLEngine on SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCertificateFile "conf/ssl.crt/server.crt" SSLCertificateKeyFile "conf/ssl.key/server.key" <FilesMatch "\.(cgi|shtml|pl|asp|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory "C:/xampp/cgi-bin"> SSLOptions +StdEnvVars </Directory> BrowserMatch ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0 CustomLog "logs/ssl_request.log" "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> <VirtualHost lmarket.local.domain.com:443> DocumentRoot "C:/xampp/htdocs/lmarket.local" ServerName lmarket.local.domain.com ServerAdmin webmaster@localhost ErrorLog "logs/error.log" <IfModule log_config_module> CustomLog "logs/access.log" combined </IfModule> SSLEngine on SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCertificateFile "conf/ssl.crt/server.crt" SSLCertificateKeyFile "conf/ssl.key/server.key" <FilesMatch "\.(cgi|shtml|pl|asp|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory "C:/xampp/cgi-bin"> SSLOptions +StdEnvVars </Directory> BrowserMatch ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0 CustomLog "logs/ssl_request.log" "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> If I invert these blocks, then the opposite happens: local.domain.com goes to lmarket.local.domain.com. Any help would be appreciated.
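
    A sketch of the usual cause on Apache 2.2: with <VirtualHost host:443> both names resolve to the same IP, so only the first vhost ever matches; name-based matching on port 443 needs NameVirtualHost with a wildcard address plus SNI-capable clients (Apache 2.2.12 or later built against an SNI-capable OpenSSL). The stanzas below are illustrative, keeping the question's paths and trimming the directives that stay unchanged.

      NameVirtualHost *:443

      <VirtualHost *:443>
          ServerName local.domain.com
          DocumentRoot "C:/xampp/htdocs/local"
          SSLEngine on
          SSLCertificateFile "conf/ssl.crt/server.crt"
          SSLCertificateKeyFile "conf/ssl.key/server.key"
      </VirtualHost>

      <VirtualHost *:443>
          ServerName lmarket.local.domain.com
          DocumentRoot "C:/xampp/htdocs/lmarket.local"
          SSLEngine on
          SSLCertificateFile "conf/ssl.crt/server.crt"
          SSLCertificateKeyFile "conf/ssl.key/server.key"
      </VirtualHost>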

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all "nginx upstreams" proxy passing calls to haproxy fast_thin and slow_thin ( server 127.0.0.1:3100 and server 127.0.0.1:3200 ), which load balance across 6 Thin servers ( 127.0.0.1:3000 .. 3005 ). Static files like /blog are currently fine. The request chain is: nginx on port 80 - haproxy on 3100 and 3200 - thin on 3000 .. 3005 and then Rails. Here is /etc/nginx/nginx.conf: user nginx; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; include /etc/nginx/conf.d/*.conf; } then /etc/nginx/conf.d/default.conf: upstream fast_thin { server 127.0.0.1:3100; } upstream slow_thin { server 127.0.0.1:3200; } server { listen 80; server_name www.gitwatcher.com; rewrite ^/(.*) http://gitwatcher.com/$1 permanent; } server { listen 80; server_name gitwatcher.com; access_log /var/www/gitwatcher/log/access.log; error_log /var/www/gitwatcher/log/error.log; root /var/www/gitwatcher/public; # index index.html; location /about { proxy_pass http://fast_thin; break; } location /trends { proxy_pass http://slow_thin; break; } location /categories { proxy_pass http://slow_thin; break; } location /signout { proxy_pass http://slow_thin; break; } location /auth/github { proxy_pass http://slow_thin; break; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://slow_thin; break; } } } then the haproxy config file /etc/haproxy/haproxy.cfg: global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon #debug #quiet nbproc 1 # number of processing cores defaults log global retries 3 maxconn 2000 contimeout 5000 mode http clitimeout 60000 # maximum inactivity time on the client side srvtimeout 30000 # maximum inactivity time on the server side timeout connect 4000 # maximum time to wait for a connection attempt to a server to succeed option httplog option dontlognull option redispatch option httpclose # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode) option abortonclose # enable early dropping of aborted requests from pending queue option httpchk # enable HTTP protocol to check on servers health option forwardfor # enable insert of X-Forwarded-For headers balance roundrobin # each server is used in turns, according to assigned weight stats enable # enable web-stats at /haproxy?stats stats auth haproxy:pr0xystats # force HTTP Auth to view stats stats refresh 5s # refresh rate of stats page listen rails_proxy 127.0.0.1:3100 # - equal weights on all servers # - maxconn will queue requests at HAProxy if limit is reached # - minconn dynamically scales the connection concurrency (bound my maxconn) depending on size of HAProxy queue # - check health every 20000 microseconds server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000 listen slow_proxy 127.0.0.1:3200 # cluster for slow requests, lower the queues, check less frequently server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000 server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000 server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000 and the Thin config file /etc/thin/gitwatcher.yml: --- chdir: /var/www/gitwatcher environment: production address: 0.0.0.0 port: 3000 timeout: 30 log: log/thin.log pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 100 require: [] wait: 30 servers: 6 daemonize: true If I look at the open listening ports, I get the following: root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin" nginx 834 root 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 835 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 837 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) haproxy 1908 haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN) haproxy 1908 haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN) root@fullness:/var/www/gitwatcher# iptables -L gives me the following: Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:22222 ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT all -- anywhere anywhere DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Any help?
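
    A sketch of how this chain is usually narrowed down layer by layer: a 503 from HAProxy generally means it considers every backend down, and with option httpchk enabled the health check is an HTTP OPTIONS request that a Rails/Thin app may not answer with a 2xx/3xx status, which would mark all six servers down; that is an assumption worth checking on the stats page. Paths and credentials below follow the question.

      # 1. Are the Thin instances up and answering on their own ports?
      for port in 3000 3001 3002 3003 3004 3005; do
          curl -s -o /dev/null -w "thin :$port -> %{http_code}\n" http://127.0.0.1:$port/
      done

      # 2. Does HAProxy answer when nginx is bypassed?
      curl -I http://127.0.0.1:3100/
      curl -I http://127.0.0.1:3200/

      # 3. What do the health checks say? (stats are enabled in haproxy.cfg)
      curl -u haproxy:pr0xystats 'http://127.0.0.1:3100/haproxy?stats'

      # 4. Logs on both sides (the haproxy log path depends on the local syslog setup):
      tail -n 50 /var/www/gitwatcher/log/error.log /var/log/haproxy.log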

    Read the article

  • apache2 mysql authentication module and SHA1 encryption

    - by Luca Rossi
    I found myself in a setup where I need to enable an authentication method using MySQL. I already have a user scheme. That user scheme works like a charm with MD5 and CRYPT passwords, but when I switch to SHA1sum it says: [Fri Oct 26 00:03:20 2012] [error] Unsupported encryption type: Sha1sum There is no useful debug information in the log files. This is my setup and some info: Debian 6, with these apache and ssl packages installed: root@sistemichiocciola:/etc/apache2/mods-available# dpkg --list | grep apache ii apache2 2.2.16-6+squeeze8 Apache HTTP Server metapackage ii apache2-mpm-prefork 2.2.16-6+squeeze8 Apache HTTP Server - traditional non-threaded model ii apache2-utils 2.2.16-6+squeeze8 utility programs for webservers ii apache2.2-bin 2.2.16-6+squeeze8 Apache HTTP Server common binary files ii apache2.2-common 2.2.16-6+squeeze8 Apache HTTP Server common files ii libapache2-mod-auth-mysql 4.3.9-13+b1 Apache 2 module for MySQL authentication ii libapache2-mod-php5 5.3.3-7+squeeze14 server-side, HTML-embedded scripting language (Apache 2 module) root@sistemichiocciola:/etc/apache2/sites-enabled# dpkg --list | grep ssl ii libssl-dev 0.9.8o-4squeeze13 SSL development libraries, header files and documentation ii libssl0.9.8 0.9.8o-4squeeze13 SSL shared libraries ii openssl 0.9.8o-4squeeze13 Secure Socket Layer (SSL) binary and related cryptographic tools ii openssl-blacklist 0.5-2 list of blacklisted OpenSSL RSA keys ii ssl-cert 1.0.28 simple debconf wrapper for OpenSSL My vhost setup: AuthMySQL On Auth_MySQL_Host localhost Auth_MySQL_User XXX Auth_MySQL_Password YYY Auth_MySQL_DB users AuthName "Sistemi Chiocciola Sezione Informatica" AuthType Basic # require valid-user require group informatica Auth_MySQL_Encryption_Types Crypt Sha1sum AuthBasicAuthoritative Off AuthUserFile /dev/null Auth_MySQL_Password_Table users Auth_MYSQL_username_field email Auth_MYSQL_password_field password AuthMySQL_Empty_Passwords Off AuthMySQL_Group_Table http_groups Auth_MySQL_Group_Field user_group Have I missed a package/configuration or something?

    Read the article

  • Local DNS server (bind) and the router DHCP

    - by Luca
    I just set up an HTTP server for internal use (it runs Redmine) in a small network (30 or so PCs). I set up the HTTP server on an Ubuntu virtual machine, which also runs the DNS server (bind). In the DNS I added the Redmine server name (redmine.engserver <- 192.168.1.14) and, as forwarders, the outside ISP DNS IP addresses. I am using a small wi-fi router (ASUS RT-N66U) as DHCP server (and as gateway). In the DHCP config page I set the DNS to the Ubuntu server IP (it is fixed: 192.168.1.14). Now when I connect a new PC to the network, the DHCP router issues its new IP and, as DNS servers, it issues: primary 192.168.1.14 (the Ubuntu machine) and secondary 192.168.1.1 (the router itself). ipconfig /all Default Gateway . . . . . . . . . : 192.168.1.1 DHCP Server . . . . . . . . . . . : 192.168.1.1 DHCPv6 IAID . . . . . . . . . . . : 248539109 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-15-AA-3F-D0-67-E5-49-A7-EF DNS Servers . . . . . . . . . . . : 192.168.1.14 192.168.1.1 NetBIOS over Tcpip. . . . . . . . : Enabled Before changing the DHCP setting on the router, I would always get only one DNS server: 192.168.1.1 (which probably forwards to external public DNS services). The problem is this: if I type www.google.com in my browser, it works all the time. If I type http://redmine.engserver/ in the browser, it works most of the time, but sometimes it ends up on a Yahoo search page or something else. In the DNS cache (ipconfig /displaydns) it shows as (Server not found). I looked with Wireshark and it seems like the client PC sometimes queries the secondary DNS (192.168.1.1) instead of the primary 192.168.1.14. Obviously that server forwards to public DNS and does not have the redmine.engserver entry. What is wrong with this configuration? Is it even legitimate to have two DNS servers (one internal and one forwarded by the router) which are inconsistent? Is there another way to have a local name service in a small office network? Why is the router's DHCP issuing itself as DNS?
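
    A sketch of how the symptom can be confirmed: query each advertised DNS server directly; only the internal BIND at 192.168.1.14 knows the internal name, so any client that happens to ask the router first will fail. The usual fix is to hand out only 192.168.1.14 via DHCP and let BIND forward everything else to the ISP, rather than listing the router as a second resolver.

      # The internal server should answer with 192.168.1.14's record:
      nslookup redmine.engserver 192.168.1.14

      # The router, which only forwards to public DNS, cannot know the internal name:
      nslookup redmine.engserver 192.168.1.1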

    Read the article

  • Configuring Postfix with other SMTP provider

    - by Eduard Luca
    I want to use SendGrid as my email sending service, but I also want to use Postfix's internal queue mechanism to manage the emails sent through SendGrid. So basically what I want to do is configure Postfix to send emails through SendGrid's SMTP server, and I will configure my app to send emails through the local Postfix. My question is: how can I configure Postfix to relay through an external SMTP server? I looked here but didn't see anything useful.
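
    A sketch of the standard relayhost-with-SASL setup for relaying through an external provider; the smtp.sendgrid.net host, port 587 and the credential placeholders are assumptions to check against SendGrid's own documentation.

      # Relay all outbound mail through the external SMTP service:
      postconf -e 'relayhost = [smtp.sendgrid.net]:587'
      postconf -e 'smtp_sasl_auth_enable = yes'
      postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
      postconf -e 'smtp_sasl_security_options = noanonymous'
      postconf -e 'smtp_tls_security_level = encrypt'

      # Credentials for the relay, then build the lookup table and reload Postfix:
      echo '[smtp.sendgrid.net]:587 SENDGRID_USERNAME:SENDGRID_PASSWORD' > /etc/postfix/sasl_passwd
      chmod 600 /etc/postfix/sasl_passwd
      postmap /etc/postfix/sasl_passwd
      postfix reload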

    Read the article

  • *Simple* way to block DDoS by number of requests

    - by Eduard Luca
    I have 3 Varnish 3.0.2 servers with Apache 2 as backends, which are load balanced through a separate HAProxy server. I need to find a very simple program (I'm not much of a sysadmin) that blocks requests from an IP if that IP has made more than X requests in Y seconds. Would something like this be achievable with a simple solution? Right now I have to block all requests manually with iptables.
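
    A sketch of one lightweight option using only iptables' recent match, run on the machine that terminates client connections (fail2ban is the usual next step up when log-based banning is wanted); the 20-requests-in-10-seconds threshold stands in for X and Y.

      # Record every new connection to port 80, then drop an IP that has opened
      # more than 20 of them within the last 10 seconds:
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP \
               --update --seconds 10 --hitcount 20 -j DROP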

    Read the article

  • Configure Supervisor to manage init.d services

    - by Eduard Luca
    I installed uwsgi and created a bash script that allows me to start/stop uwsgi in the following manner: service uwsgi [start|stop]. This bash script is located at /etc/init.d/uwsgi. Now I want to (politely) ask Supervisor to use that script to manage the uwsgi process. All the tutorials indicate that this is not the way to do it; however, I do want to be able to do both service uwsgi stop and supervisorctl stop uwsgi (not sure if I nailed the syntax of the latter), even though I am aware that the first one will not in fact stop my service, because Supervisor will restart it (that's exactly what I need). Note that I'm using uwsgi in emperor mode, if that matters in any way.
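
    A sketch of the approach the tutorials point at: rather than wrapping the init script, give Supervisor a foreground uwsgi emperor command of its own, since it can only supervise processes that do not daemonize. The binary path, vassal directory and conf.d location are assumptions for a Debian/Ubuntu-style layout.

      # /etc/supervisor/conf.d/uwsgi.conf, written via a heredoc for illustration:
      cat > /etc/supervisor/conf.d/uwsgi.conf <<'EOF'
      [program:uwsgi]
      ; run the emperor in the foreground so Supervisor can track and restart it
      command=/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals
      autostart=true
      autorestart=true
      stopsignal=QUIT
      EOF

      supervisorctl reread
      supervisorctl update
      supervisorctl status uwsgi      # then: supervisorctl stop uwsgi / start uwsgi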

    Read the article

  • How can I realize a video wall on 3-9 PCs with VLC

    - by Luca
    Hello! I have to create a video wall with 3 to 9 monitors, each monitor driven by its own PC. Currently I stream 9 movies from a server with different instances of VLC, but I could also play the corresponding video on every machine with a single player; that is not the problem. The real problem is that I really don't know how to sync the videos over the LAN... unfortunately the NETSYNC module inside VLC is NOT working for me. Here is some info about my setup: a video wall of 3 to 9 monitors || 3 to 9 PCs, all with the same configuration || a gigabit router+switch for the "dedicated" LAN. I'm really stuck in this situation; if anyone has an idea, or just a completely different solution, please share it with me! Thanks a lot in advance! :)

    Read the article

  • FTP - 530 Sorry, the maximum number of clients…?

    - by Luca Filosofi
    Hi all! My problem is that my FTP works great, except when I upload files to one particular client server. On this server some files upload fine and others don't: they stop halfway through the upload, and then this error is displayed: 530 Sorry, the maximum number of clients (4) from your host are already connected. Unable to make a connection. Please try again. Obviously this is not true; I'm the only one uploading! Has anyone had the same experience with this? PS: I have tried many different FTP clients; all display the same error or just hang up. Thanks

    Read the article

  • VMware virtual image compatibility

    - by luca
    If I create a VMware virtual machine image on my Mac (with VMware Fusion 2), it creates a file like "Ubuntu.vmwarevm". My question (which I couldn't get answered by VMware Support) is: will this file be compatible with an ESXi 4.0 server? In general, are virtual machines for VMware all in the same format? Thanks

    Read the article
