Dear Sir,
Please help me. I have a Reliance broadband connection and I want to set up a local area network without using a router. Please tell me how to set this up.
With regards,
Ritesh Rana
[email protected]
Hi everyone,
I am running Windows Server 2003 and have noticed that no one is able to connect to the machine through Remote Desktop. I have gone through the Terminal Services Configuration to make sure that the RDP-Tcp connection is enabled, and I've checked that the server is listening on port 3389. I've also tried to ping our host server, with no results. Are there any other options I should check?
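For reference, this is roughly how I have been checking things (the hostname is a placeholder): on the server itself,
netstat -an | find "3389"
should show 0.0.0.0:3389 in the LISTENING state, and from a client
telnet myserver.example.local 3389
should give a blank screen if the port is reachable.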
Thanks in advance.
The title pretty much says everything: I'm using server-side filtering so I need Thunderbird to watch all folders for new messages using IDLE.
I've already tried enabling "When getting new messages for this account, always check this folder" but after restarting TB I did not have an IMAP IDLE connection open for every folder.
I've also tried setting mail.check_all_imap_folders_for_new (which did not exist before) and mail.server.default.check_all_folders_for_new to true - nothing changed.
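For reference, these are the prefs I set (prefs.js syntax), plus mail.server.serverN.max_cached_connections, which I am considering raising on the assumption that it caps the number of open IMAP connections (server1 is just whichever number my account maps to):
user_pref("mail.check_all_imap_folders_for_new", true);
user_pref("mail.server.default.check_all_folders_for_new", true);
user_pref("mail.server.server1.max_cached_connections", 10);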
I've been trying to get some information about the MAPI plugin for Evolution, but it all seems to be in pieces everywhere, and mostly a couple of years old.
Has anyone had any experience with getting Evolution to connect to MS Exchange via MAPI? Unfortunately, any other connection method (IMAP or WebDAV) is not an option, either because it's not allowed or because it's just unusable.
I'm trying to use memcached from a different machine (which has access to my server), but I can't figure out how.
On the memcached machine I can test the connection by running
telnet 127.0.0.1 port
and it works, but on the other machine it just keeps trying to connect:
telnet machine_address port
Trying machine_address...
I'm not sure if I need to set up something else to get it working. I know the port is open and accessible, because if I run other services on it, they work.
The OS is Ubuntu.
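A sketch of what I was planning to check next (assuming memcached came from the Ubuntu package and reads /etc/memcached.conf):
# a "-l 127.0.0.1" line here restricts memcached to the loopback interface
grep '^-l' /etc/memcached.conf
# confirm which address memcached is actually bound to
sudo netstat -tlnp | grep memcached
# restart after changing the -l line to the LAN address (or removing it)
sudo service memcached restart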
Getting error:
Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
I've restarted the service, the server, and all the computers in between. I suspect it's not connecting to the domain controller; is there any way to check this?
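In case it's useful, this is roughly how I was thinking of checking the secure channel to the domain (MYDOMAIN is a placeholder; nltest may need the Support Tools installed, depending on the OS version):
nltest /sc_query:MYDOMAIN
nltest /sc_verify:MYDOMAIN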
I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web serving experience than Apache.
PHP 5.4.6 (PHP54w)
CentOS 6.2
Joomla 2.5.6
PHP54w-fpm.i386 (FastCGI process manager)
php -m shows: mysql & mysqli modules loaded
Nginx seems to have installed fine via yum, it can process a PHP-info file via FastCGI perfectly OK (http://37.128.190.241/php.php) but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available."
I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available" of course!
Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try to force the issue (despite php -m showing they were both loaded anyway), but it made no difference.
I have a pretty standard nginx default.conf:
server {
    listen 80;
    server_name www.MYDOMAIN.com;
    server_name_in_redirect off;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;

    root /var/www/html/MYROOT_DIR;
    index index.php index.html index.htm default.html default.htm;

    # Support Clean (aka Search Engine Friendly) URLs
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # deny running scripts inside writable directories
    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
        error_page 403 /403_error.html;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi.conf;
    }

    # caching of files
    location ~* \.(ico|pdf|flv)$ {
        expires 1y;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
        expires 14d;
    }
}
Snip of output from phpinfo under nginx:
Server API FPM/FastCGI
Virtual Directory Support disabled
Configuration File (php.ini) Path /etc
Loaded Configuration File /etc/php.ini
Scan this dir for additional .ini files /etc/php.d
Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini
Snip of output from phpinfo under Apache:
Server API Apache 2.0 Handler
Virtual Directory Support disabled
Configuration File (php.ini) Path /etc
Loaded Configuration File /etc/php.ini
Scan this dir for additional .ini files /etc/php.d
Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini
It seems that with Apache, PHP is loading substantially more additional .ini files than with nginx, including the MySQL-related ones (mysql.ini, mysqli.ini, pdo_mysql.ini).
Any ideas how I get nginx (PHP-FPM) to also pick up these additional .ini files?
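For reference, this is how I have been comparing what each SAPI parses (the php-fpm service name is my assumption for the php54w-fpm package):
# CLI view: lists the loaded php.ini and the additional .ini files parsed
php --ini
# the MySQL-related .ini files should be sitting here if the extension packages are installed
ls /etc/php.d
# FPM only rescans /etc/php.d after a restart
service php-fpm restart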
Thanks in advance,
Steve
I'm using Ubuntu 12.04 and I would like to use apt-get to download a package and all of its dependencies.
Those packages will have to be installed on computers with no internet connection, so in addition to the base package I also need all of the package's dependencies.
Is there an easy way to do this (like in the Muon package manager)?
I know that I can use the apt-get download command for this, but I don't want to manually specify each package that Muon recommends to install or upgrade.
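A sketch of what I have in mind (assuming the downloading machine runs the same Ubuntu release and architecture; the package name is a placeholder):
# fetches the package plus any dependencies that are not already installed,
# into /var/cache/apt/archives, without installing anything
sudo apt-get install --download-only some-package
# copy the .deb files to the offline machine, then there:
sudo dpkg -i *.deb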
I still have dial-up (can't afford high-speed) and I am trying to configure my Vista desktop to share that connection with Ubuntu. I've got it all set up, but Ubuntu won't get on the internet at all. Can y'all help me with this problem?
I've installed Ubuntu 10.04, hoping to get rid of the 9.10 errors (the Nvidia driver didn't work quite properly, I had some problems with clicking in Flash, and I had, and still have, a wired network connection that works even though Ubuntu says it's disconnected).
So the Nvidia driver is better now, but the network error is the same: I still have a working wired network connection, but at startup it says that it is disconnected, and the notification area tells me the same thing.
Why is that?
Using Launchpad from the Mac and trying to connect to Server Essentials 2011, no one is able to actually log in; the little login wheel just keeps spinning and spinning. Does anyone have any ideas what might be keeping us from logging in? On the Mac side we are using 10.6 and 10.7, and both exhibit the same problem. I also cannot connect to the server by Remote Desktop Connection within the local network.
HELP!
In Windows there appear to be two ways to set up IPsec:
The IP Security Policy Management MMC snap-in (part of secpol.msc, introduced in Windows 2000).
The Windows Firewall with Advanced Security MMC snap-in (wf.msc, introduced in Windows 2008/Vista).
My question concerns #2 – I already figured out what I need to know for #1. (But I want to use the ‘new’ snap-in for its improved encryption capabilities.)
I have two Windows Server 2008 R2 computers in the same domain (domain members), on the same subnet:
server2 172.16.11.20
server3 172.16.11.30
My goal is to encrypt all communication between these two machines using IPsec in tunnel mode, so that the protocol stack is:
IP
ESP
IP
…etc.
First, on each computer, I created a Connection Security Rule (an approximate netsh equivalent is sketched after the list):
Endpoint 1: (local IP address), eg 172.16.11.20 for server2
Endpoint 2: (remote IP address), eg 172.16.11.30
Protocol: Any
Authentication: Require inbound and outbound, Computer (Kerberos V5)
IPsec tunnel:
Exempt IPsec protected connections
Local tunnel endpoint: Any
Remote tunnel endpoint: (remote IP address), eg 172.16.11.30
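I actually used the snap-in, but I believe this is roughly the netsh equivalent of that rule as created on server2 (treat the exact syntax as an approximation):
netsh advfirewall consec add rule name="Tunnel to server3" ^
    endpoint1=172.16.11.20 endpoint2=172.16.11.30 ^
    action=requireinrequireout auth1=computerkerb ^
    mode=tunnel localtunnelendpoint=any remotetunnelendpoint=172.16.11.30 ^
    qmsecmethods=esp:sha1-aes256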
At this point, I can ping each machine, and Wireshark shows me the protocol stack; however, nothing is encrypted (which is expected at this point). I know that it's unencrypted because Wireshark can decode it (using the setting Attempt to detect/decode NULL encrypted ESP payloads) and the Monitor Security Associations Quick Mode display shows ESP Encryption: None.
Then on each server, I created Inbound and Outbound Rules (again, a rough netsh equivalent follows the list):
Protocol: Any
Local IP addresses: (local IP address), eg 172.16.11.20
Remote IP addresses: (remote IP address), eg 172.16.11.30
Action: Allow the connection if it is secure
Require the connections to be encrypted
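Roughly the netsh form of the inbound rule on server2 (the outbound rule is the same with dir=out; again, I built these in the GUI, so this is an approximation):
netsh advfirewall firewall add rule name="Encrypted traffic from server3" ^
    dir=in action=allow security=authenc protocol=any ^
    localip=172.16.11.20 remoteip=172.16.11.30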
The problem: Though I create the Inbound and Outbound Rules on each server to enable encryption, the data is still going over the wire (wrapped in ESP) with NULL encryption. (You can see this in Wireshark.)
When the traffic arrives at the receiving end, it's rejected (presumably because it's unencrypted). [And disabling the Inbound rule on the receiving end causes it to lock up and/or bluescreen – fun!] The Windows Firewall log says, eg:
2014-05-30 22:26:28 DROP ICMP 172.16.11.20 172.16.11.30 - - 60 - - - - 8 0 - RECEIVE
I've tried varying a few things:
In the Rules, setting the local IP address to Any
Toggling the Exempt IPsec protected connections setting
Disabling rules (eg disabling one or both sets of Inbound or Outbound rules)
Changing the protocol (eg to just TCP)
But realistically there aren't that many knobs to turn.
Does anyone have any ideas? Has anyone tried to set up tunnel mode between two hosts using Windows Firewall?
I've successfully got it set up in transport mode (ie no tunnel) using exactly the same set of rules, so I'm a bit surprised that it didn't Just Work™ with the tunnel added.
I connect to a remote computer using mstsc over a slow connection. I have already set the settings on the Experience tab to the lowest. I was wondering if reducing the color depth will improve my experience any further.
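For what it's worth, the setting I am considering lives in the saved .rdp file (Default.rdp in my Documents folder, or whichever .rdp file I launch); a lower "session bpp" value means fewer bits per pixel sent over the wire:
session bpp:i:15
compression:i:1
bitmapcachepersistenable:i:1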
This is by far one of the strangest things I have seen. I have a Win 2008 R2 cluster with a CSV; the CSV itself is on iSCSI storage (Hitachi HUS 110).
The basic config of the two hosts in the cluster is:
Dell R610
Win 2008 R2 with all patches
64GB
1 NIC for host access
2 NICs for guest access
2 NICs for iSCSI
These machines work great, and I can load a 2008 R2 test guest machine on them in less than 90 seconds.
After the above config had been running for over a year, I now need to add a new host.
The new host is:
Dell R620 (Still intel but different CPU)
Win 2008 R2 with all patches
64GB
1 NIC for host access
2 NICs for guest access
2 NICs for iSCSI
I added this new host to the domain and to the cluster, and gave it access to the CSV.
Then I tried loading the same guest machine that loads in 90 seconds on the other hosts. The machine loads in about 6 minutes. No matter how many times I try this, the old hosts load the machine in about 90 seconds and this new host in around 6 minutes.
To eliminate any problems with the iSCSI connection, I added a new LUN and accessed it directly from the new host; I was working at around 300 MB/s, so no problem there.
I also tested the connection between the other hosts and the new one, and the network is working fine there too.
To eliminate problems in Hyper-V, I copied the machine to the local disk of the new host, and it loaded in less than 20 seconds.
Now is the point where things get a lot stranger:
In my tests I tried installing a fresh Windows guest machine to the CSV from the new host. I noticed that while the fresh Windows was installing, my test guest was loading in less than 90 seconds even on the new host (I repeated this a few times). If I paused the fresh-install guest and tried loading the test guest again, it loaded in 6 minutes; and again, after I resumed the guest installation, the test guest loaded fast.
After the fresh Windows had finished installing, I ran tests loading the fresh Windows guest and my test machine. Each of them loaded in about 5 minutes when I tried loading them separately; however, when I started both of them at the same time, they both loaded in around 2.5 minutes.
It seems that the iSCSI disk access only performs well when it is under some load (although I never got above 10% utilization according to Task Manager).
Does anyone have any idea what could be the problem?
Hi,
At work, due to our network configuration, I cannot SSH to external servers. We are in a Windows environment. I need to SSH to a server of mine, but I can only exit our LAN via port 88.
How could I use my home macOS box to accept a connection on port 88 from my work computer and route it via SSH to the server I need to connect to?
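A sketch of what I had in mind (assuming my home router can forward external port 88 to port 22 on the Mac; hostnames are placeholders):
# from work, tunnel through the home Mac to the real server
ssh -p 88 user@home-mac.example.com -N -L 2222:real-server.example.com:22
# then, in a second terminal at work
ssh -p 2222 user@localhost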
Thanks.
This started happening when I installed Windows 7. I've tried with Filezilla and with the FireFTP Firefox plugin, and I could never connect. Filezilla gave the error message "ECONNREFUSED - Connection refused by server". I tried disabling the Windows Firewall, but no luck. Any ideas on what might be causing this?
I just discovered that I can connect with web services like net2ftp, but not with FTP clients.
Recently we have installed Windows Server 2008 R2 on one of our development boxes at work. We have 10 Client Access Licences for Microsoft Windows Terminal Server 2008. I'm under the impression that these licences entitle us to 10 concurrent Remote Desktop connections. At the moment we are only allowed two.
Can we have one RD connection per CAL? If so, how do we configure this?
Thanks!
I connect to a remote machine from my local machine and use virsh console to enter the virtual machine. I don't know exactly how to describe it: normally it works well, but when I run vim the display becomes garbled, and it can't be recovered unless I cut the connection.
It's very hard to work with a broken terminal. Any advice? My terminal works well on my local machine and on the remote machine in which the virtual machine runs.
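A couple of things I was planning to try, on the assumption that it is just messed-up terminal state: typing (blind)
reset
or
stty sane
inside the guest after vim mangles the display, and Ctrl+] to detach from the virsh console back to the remote host.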
Hi, I'm looking for a way to minimize network traffic over my netbook's mobile internet connection. Recently I managed to install Opera Mini on XP, and Opera's approach of compressing the data helped a lot. I would like to do the same with my favorite browser, using an HTTP proxy that compresses the data on the fly, but searching for "compression proxy servers" I could not find any working host/port combinations. Is this a brand-new technology and therefore expensive or rarely available?
Hi,
I want to know how I can share my network using a VPN connection.
I have Windows Server 2003. I tried to install VPN, and when I connect to the server I only get the local server, but I want to reach my whole network.
How can I do this?
Thanks
I am using the Maven buildNumber plugin to generate a build number from the latest SVN revision, but our version is not resolved from ${buildNumber} when the artifact is installed into the local .m2 repository.
Here are the relevant parts of our pom:
<modelVersion>4.0.0</modelVersion>
<groupId>com.hp.cloudprint</groupId>
<artifactId>testutils</artifactId>
<name>testutils</name>
<version>6.3.rel.${buildNumber}</version>
<description>This jar contains some helper classes which can simplify the writing of JUnit test cases.</description>
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>buildnumber-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>useLastCommittedRevision</id>
          <phase>validate</phase>
          <goals>
            <goal>create</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <doCheck>false</doCheck>
        <doUpdate>true</doUpdate>
        <getRevisionOnlyOnce>true</getRevisionOnlyOnce>
      </configuration>
    </plugin>
  </plugins>
</build>
<scm>
  <connection>scm:svn:https://acn-platform</connection>
  <developerConnection>scm:svn:https://abc-platform/trunk</developerConnection>
</scm>
</project>
Building jar: C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar
[INFO] [install:install]
[INFO] Installing C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar to C:\Documents and Settings\jhab\.m2\repository\com\hp\cloudprint\testutils\6.3.rel.${buildNumber}\testutils-6.3.rel.${buildNumber}.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
The target directory generated the correctly named jar: testutil-6.3.rel.2297.jar
Thanks in advance
Binit
I've been using iTunes to stream music over the LAN, and quite frankly the quality is terrible. On the other hand, its ease of use is nice: a simple client on either end that just 'works', with full control on my end. The quality, however, is terrible. I'm wondering if there's any controllable streaming music server for Windows that will let me pick the bitrate it sends at (or just use whatever bitrate the original file is in), can be controlled from the client end, and will work acceptably over a WLAN connection.
We have been running nginx and uWSGI, and now we are evaluating putting Varnish as a caching layer between nginx and uWSGI (similar to http://www.heroku.com/how/architecture).
But nginx only supports HTTP 1.0 on the back end, so it will have to create a new connection to Varnish for each request.
Many recommend running nginx in front of Varnish, but wouldn't it make much more sense to use something like Cherokee, so that you eliminate the HTTP connection overhead, since it supports HTTP 1.1 on the back end?
I've been using an SSH proxy to my home network to encrypt my internet surfing, which is fine. But the connection is much slower than a direct one, and when I'm downloading large files I'd rather go around the proxy. Currently, I send the file to DownThemAll, go to FoxyProxy and disable the proxy, cancel and resume the download, and then, once it has started, go back to FoxyProxy and re-enable it. Is there any way I can just have DownThemAll traffic skip FoxyProxy?
I want to buy a USB-powered GPS module together with mapping software. Any recommendations on the various options I have?
Much like Garmin's MapSource software and GPS 18 USB:
nRoute features an easy-to-use interface, making it intuitive to operate so you can focus on driving. It offers auto-routing and voice-prompting capabilities to virtually any address. The GPS 18 USB includes a 12 parallel channel, WAAS-enabled sensor with USB connection.