Search Results

Search found 4304 results on 173 pages for 'bytes'.


  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    Already set up my local network to be monitored by Nagios3. The problem is that Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which hosts an individual virtual host and a mass virtual host for application development. I get this status message:

        HTTP WARNING: HTTP/1.1 404 Not Found

    I checked by hand with the Nagios plugin (servername is the vhost ServerName from my Apache configuration):

        /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52

    and received this status message:

        HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0

    But when I check like this:

        /usr/lib/nagios/plugins/check_http -I 192.168.1.52

    I get the same 404 as the warning. So I assume I don't have Nagios completely set up: it doesn't pass the vhost name for that server the way my manual check_http call does. Where should I look to fix that warning?
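
    A likely cause, given the two results above, is that the Nagios check calls check_http with only the IP, so Apache answers with the default vhost and a 404. A minimal sketch of a vhost-aware check, assuming a stock Debian Nagios3 layout (the host_name, service description and the generic-service template are placeholders to adapt):

        define command {
            command_name check_http_vhost
            command_line /usr/lib/nagios/plugins/check_http -H $ARG1$ -I $HOSTADDRESS$
        }
        define service {
            use                  generic-service
            host_name            debian-server
            service_description  HTTP servername
            check_command        check_http_vhost!servername
        }

    With this, Nagios sends the same Host header the manual check_http -H call does, so both should return 200.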

    Read the article

  • Can a file change size when the transfer protocol changes?

    - by djechelon
    I am very curious about something I have just found happening on my computers. I set up SyncBackPro to synchronize a music folder from my home desktop to my laptop over a Windows network share (SMB), and files get synchronized regularly. Then I tried switching to FTP, and I noticed that NO FILE matches its counterpart, even though none have been modified (I make sure the read-only flag is set and no application is allowed to retag the MP3s and so on), so SyncBack asks me which side should overwrite the other. The FTP sizes are a little larger than the local files. I run the synchronization from the laptop. How can such a thing happen? The files are the same; the bytes should be the same... If I run the SMB sync again, it matches all the files again.
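
    One way to pin down what changed is to checksum a sample file on both machines and compare, and to rule out FTP ASCII mode, which rewrites line-ending bytes and is a classic cause of size drift on transfer. A hedged sketch (the file name is a placeholder):

        # run on each machine, then compare the two hashes
        md5sum song.mp3
        # with a command-line FTP client, force binary mode before fetching:
        #   ftp> binary
        #   ftp> get song.mp3

    If the hashes match and only the FTP listing disagrees, the difference is in how sizes are reported rather than in the bytes themselves.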

    Read the article

  • Serial port: read data problem, not reading complete data

    - by Anuj Mehta
    Hi, I have an application where I am sending data via serial port from PC1 (a Java app) and reading that data on PC2 (a C++ app). The problem I am facing is that PC2 (the C++ app) is not able to read all of the data sent by PC1: PC1 sends 190 bytes, but PC2 reads only about 140, even though I try to read in a loop. Below are snippets from my C++ app.

    Open the connection to the serial port:

        serialfd = open(serialPortName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);
        if (serialfd == -1)
        {
            /* Could not open the port. */
            TRACE << "Unable to open port: " << serialPortName << endl;
        }
        else
        {
            TRACE << "Connected to serial port: " << serialPortName << endl;
            fcntl(serialfd, F_SETFL, 0);
        }

    Configure the serial port parameters:

        struct termios options;

        /* Get the current options for the port... */
        tcgetattr(serialfd, &options);

        /* Set the baud rates to 38400... */
        cfsetispeed(&options, B38400);
        cfsetospeed(&options, B38400);

        /*
         * 8N1
         * Data bits - 8
         * Parity    - None
         * Stop bits - 1
         */
        options.c_cflag &= ~PARENB;
        options.c_cflag &= ~CSTOPB;
        options.c_cflag &= ~CSIZE;
        options.c_cflag |= CS8;

        /* Enable hardware flow control */
        options.c_cflag |= CRTSCTS;

        /* Enable the receiver and set local mode... */
        options.c_cflag |= (CLOCAL | CREAD);

        /* Flush the earlier data */
        tcflush(serialfd, TCIFLUSH);

        /* Set the new options for the port... */
        tcsetattr(serialfd, TCSANOW, &options);

    Now I read the data:

        const int MAXDATASIZE = 512;
        std::vector<char> m_vRequestBuf;
        char buffer[MAXDATASIZE];
        int totalBytes = 0;
        int bytesRead = 0;

        fcntl(serialfd, F_SETFL, FNDELAY);

        while (1)
        {
            bytesRead = read(serialfd, &buffer, MAXDATASIZE);
            if (bytesRead == -1)
            {
                /* Sleep for some time and read again */
                usleep(900000);
            }
            else
            {
                totalBytes += bytesRead;
                /* Add the data read to the vector */
                for (int i = 0; i < bytesRead; i++)
                {
                    m_vRequestBuf.push_back(buffer[i]);
                }

                int newBytesRead = 0;
                /* Now keep trying to read more data */
                while (newBytesRead != -1)
                {
                    /* Clear the contents of the buffer */
                    memset((void*)&buffer, 0, sizeof(char) * MAXDATASIZE);
                    newBytesRead = read(serialfd, &buffer, MAXDATASIZE);
                    totalBytes += newBytesRead;
                    for (int j = 0; j < newBytesRead; j++)
                    {
                        m_vRequestBuf.push_back(buffer[j]);
                    }
                } /* inner while */
                break;
            }
        } /* outer while */
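
    One way to separate an application bug from a link problem is to take the C++ code out of the picture and read the port from the shell on PC2. A hedged sketch, assuming the port is /dev/ttyS0 and the same 38400 8N1 + RTS/CTS settings:

        stty -F /dev/ttyS0 38400 cs8 -cstopb -parenb crtscts raw
        dd if=/dev/ttyS0 bs=1 count=190 | wc -c   # now send the 190 bytes from PC1

    If all 190 bytes arrive this way, the loss is in the reading loop; note that under FNDELAY the inner loop above treats the first read() that returns -1 (EAGAIN) as the end of data, even if more bytes are still in flight.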

    Read the article

  • Reading the *.sqlite files stored in the Firefox profile.

    - by celil
    How would one go about reading the data in the *.sqlite files in one's profile? Trying to read them with sqlite3 was unsuccessful:

        $ sqlite3
        SQLite version 3.7.4
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> .load ./downloads.sqlite
        Error: dlopen(./downloads.sqlite, 10): no suitable image found.  Did find:
                ./downloads.sqlite: unknown file type, first eight bytes: 0x53 0x51 0x4C 0x69 0x74 0x65 0x20 0x66
        sqlite> .exit

    Is this due to Firefox using an older version of sqlite? I am using Firefox v3.6.13.
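
    Those first eight bytes spell 'SQLite f', the start of the standard "SQLite format 3" header, so the file itself is a valid database. The problem is the command: .load opens a compiled extension library (hence the dlopen error), while a database file is opened by passing it as an argument. A small sketch (the moz_downloads table name is from memory of Firefox 3.x and may differ):

        $ sqlite3 downloads.sqlite
        sqlite> .tables
        sqlite> SELECT name, source FROM moz_downloads LIMIT 5;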

    Read the article

  • Why does a zip file appear larger than the source file, especially when it is text?

    - by PeanutsMonkey
    I have a text file that is 19 bytes in size, and having compressed it using both zip and 7zip, it appears to be larger. I read the questions Why is a 7zipped file larger than the raw file? as well as Why doesn't ZIP Compression compress anything?, but considering the file is not already compressed, I would have expected further compression. Attached is a screenshot.

    EDIT 0: I took the example further by creating a file that contained random data, as follows:

        dd if=/dev/urandom of=sample.log bs=1G count=1

    and attempted to compress the file using both zip and 7zip, but there were no compression gains. Why is that?
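
    The container overhead is easy to measure directly; a small sketch (the file name and contents are arbitrary):

        printf 'this is a test file' > tiny.txt    # exactly 19 bytes
        zip tiny.zip tiny.txt
        stat -c '%s %n' tiny.txt tiny.zip
        # expect the archive to be far larger than 19 bytes: the local file
        # header, central directory and end-of-central-directory records
        # cost more than compressing 19 bytes can save

    The urandom case is different: random bytes carry no redundancy for the compressor to remove, so the output is essentially the input stored as-is, plus the same headers.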

    Read the article

  • Browsers ignoring hosts file

    - by madkris
    Recently my browsers started to ignore my hosts file. I have Windows 7 installed, and the entry is:

        192.168.0.5 livesite.com

    I have tried:

    - Clearing the browser cache
    - Issuing "ipconfig /flushdns" from the command line
    - Issuing "ping livesite.com" from the command line (response was "Reply from 192.168.0.5: bytes=32 time=1ms TTL=128")
    - Restarting the unit
    - Backing up the original hosts file and making a new one
    - Checking lmhosts.sam (everything is commented out)
    - Connecting directly to the modem using a cable
    - Checking \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath
    - Trying it on another laptop with exactly the same specs as mine

    Then I tried:

    - Changing the entry to "127.0.0.1 livesite.com" (ping OK, browser OK)
    - Changing the entry to "192.168.0.5 livesite.com" (ping OK, browser OK but only for a second)
    - Issuing "ipconfig /flushdns" from the command line (ping OK, browser not OK)
    - Changing the entry to "127.0.0.1 livesite.com" (ping OK, browser OK)
    - Changing the entry to "192.168.0.5 livesite.com" (ping OK, browser not OK)
    - Issuing "ipconfig /flushdns" from the command line (ping OK, browser not OK)

    Any idea why it worked for a moment? Or better yet, anything I haven't tried or some error I may have overlooked?

    Read the article

  • Install problems with XSendFile on Ubuntu

    - by Dan
    I installed the Apache dev headers:

        sudo apt-get install apache2-prefork-dev

    Downloaded and compiled the module as outlined here: http://tn123.ath.cx/mod_xsendfile/

    Added the following line to /etc/apache2/mods-available/xsendfile.load:

        LoadModule xsendfile_module /usr/lib/apache2/modules/mod_xsendfile.so

    Added this to my VirtualHost:

        <VirtualHost *:80>
            XSendFile on
            XSendFilePath /path/to/protected/files/

    Enabled the module:

        sudo a2enmod xsendfile

    Then I restarted Apache. This code still just gives me an empty file of 0 bytes:

        file_path = '/path/to/protected/files/some_file.zip'
        file_name = 'some_file.zip'
        response = HttpResponse('', mimetype='application/zip')
        response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
        response['X-Sendfile'] = smart_str(file_path)
        return response

    And there is nothing in the Apache error log that pertains to XSendFile. What am I doing wrong?
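
    Two quick checks from the shell can narrow this down; a hedged sketch (the download URL is a placeholder for whatever view runs the code above):

        apache2ctl -M 2>/dev/null | grep -i xsendfile   # is the module actually loaded?
        curl -sI http://localhost/some/download/url/ | grep -i x-sendfile
        # if X-Sendfile is still visible in the response headers, Apache
        # passed it through untouched, i.e. the module is not active for
        # this vhost
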

    Read the article

  • Extract number with regex

    - by Joey
    I have this string:

        HTTP/1.1 200 OK
        Date: Tue, 12 Nov 2013 15:26:17 GMT
        Server: Apache/2.2.3 (CentOS)
        Last-Modified: Fri, 08 Nov 2013 21:34:50 GMT
        ETag: "452//path/to/file"
        Accept-Ranges: bytes
        Content-Length: 26010
        Connection: close
        Content-Type: text/plain; charset=UTF-8

    and would like to extract 452, which comes after ETag and before //. What regex should I use? I am stuck. Thanks a lot.
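
    One way with standard shell tools, assuming the headers are in a file (here headers.txt, a placeholder); the first variant needs GNU grep's -P mode:

        grep -oP 'ETag: "\K[0-9]+(?=//)' headers.txt
        # or, with sed:
        sed -n 's/.*ETag: "\([0-9]*\)\/\/.*/\1/p' headers.txt

    Both print 452: the pattern anchors on the literal ETag: ", captures the digits, and requires the // right after them.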

    Read the article

  • How can I log all traffic with its exact length?

    - by Legate
    I want to log all packets, with their sizes, passing through our gateway server (running Debian 4.0). My idea is to use tcpdump, but I have two questions. The command I'm currently thinking of is:

        tcpdump -i iface -n -t -q

    1. Is it guaranteed that tcpdump will process all packets? What happens if the CPU is working at full capacity?
    2. The format of the output lines is "IP ddd.ddd.ddd.ddd.port > ddd.ddd.ddd.ddd.port: tcp 1260". What exactly is 1260? I suspect it is the packet's payload in bytes, which would be exactly what I need, but I'm not sure. It might be the TCP window size.

    Or perhaps there is an even better way of doing this? I thought about a LOG rule in iptables, but tcpdump seems easier, and I don't know whether iptables can log packet lengths.
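
    On question 2: in the -q form shown, the trailing number is the TCP payload length in bytes, not the window size. A small sketch that keeps a running total as lines stream past (the interface name is a placeholder; -l line-buffers stdout so the pipe flows):

        tcpdump -i iface -n -t -q -l tcp | awk '{ sum += $NF; print $0, "(cum:", sum, "bytes)" }'

    On question 1 there is no hard guarantee: under heavy load the kernel can drop packets before tcpdump sees them, and tcpdump reports this in its "packets dropped by kernel" summary on exit.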

    Read the article

  • Apache on Win32: slow transfers of single static files over HTTP, fast over HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550 kB/s even over 100 Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7 MByte/s! I also tried the AnalogX SimpleWWW server just to test plain HTTP speed, and it gave me a healthy 3.3 Mbyte/s.

    I am at a total loss here. I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576    # (tried multiples of that too, up to 100 Mbytes)
        EnableSendfile Off        # (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       # (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1 ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So:

        AnalogX HTTP:                                          3300 kB/s
        Gene6 FTPD, plain:                                     3500 kB/s
        Gene6 FTPD, implicit and explicit SSL, AES256 cipher:  1800-2000 kB/s
        freeSSHD:                                              1100 kB/s
        SMB shared folder:                                     about 3000 kB/s
        Apache HTTP, plain:                                    550 kB/s
        Apache HTTPS:                                          2700 kB/s

    Clients that were used in the bandwidth testing:

        Internet Explorer 8 (HTTP, HTTPS)
        Firefox 8 (HTTP, HTTPS)
        Chrome 13 (HTTP, HTTPS)
        Opera 11.60 (HTTP, HTTPS)
        wget under CygWin (HTTP, HTTPS)
        FileZilla (FTP, FTPS, FTP+ES, SFTP)
        Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200 MHz machine with 2 GB RAM. However, I would like Apache to serve at at least 2 Mbyte/s instead of 550 kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output was generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open-source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. Here it is:

    LiveHTTPHeaders, plain HTTP (http://www.mydomain.com/elephantsdream_source.264):

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS (https://www.mydomain.com/elephantsdream_source.264; request headers identical to the plain-HTTP request above):

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
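
    For repeatable numbers that bypass browser quirks, a hedged sketch using curl from any client machine (same placeholder URL as above):

        curl -o /dev/null -w 'HTTP:  %{speed_download} bytes/s\n'  http://www.mydomain.com/elephantsdream_source.264
        curl -k -o /dev/null -w 'HTTPS: %{speed_download} bytes/s\n' https://www.mydomain.com/elephantsdream_source.264

    If curl shows the same 550 kB/s cap on plain HTTP only, that rules out the browsers and points at the server, or at something inspecting plain-HTTP traffic in between.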

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input.

    The peaks in available entropy and MySQL slow queries occur in sync and at regular intervals. The number of MySQL slow queries is proportional to our number of Drupal users, whereas the peaks in available entropy seem much more constant and less proportional to those two metrics. We therefore think available entropy reflects a root cause which, combined with the traffic to our website, is causing those slow queries (and not the opposite, slow queries influencing the entropy). Accordingly:

    Q: What underlying problem could cause regular peaks in available entropy that could influence MySQL's ability to process queries?
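
    The kernel exposes the pool size directly, so the Munin graph can be cross-checked live while the slow queries accumulate:

        watch -n 1 cat /proc/sys/kernel/random/entropy_avail
        # a pool that keeps draining toward zero means something is eating
        # randomness (e.g. SSL/TLS handshakes or anything reading /dev/random);
        # daemons such as haveged or rng-tools can keep it topped up
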

    Read the article

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date)>>~/backuplog.txt;
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/;
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 | grep -v "zip info: local extra (21 bytes)">>~/backuplog.txt;
            done;
        else
            echo "backup volume not mounted">>~/backuplog.txt;
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disk speeds bottlenecking the process, or am I doing something wrong?
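
    Niceness only cedes CPU when something else wants it, so 5% CPU with nothing competing points at I/O, most likely the network share. A quick hedged check of the share's raw write speed (OS X dd syntax; the test file name is arbitrary):

        dd if=/dev/zero of=/Volumes/backup/ddtest bs=1m count=100 && rm /Volumes/backup/ddtest

    If dd crawls too, the volume is the bottleneck and zip is doing fine.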

    Read the article

  • How do I compare binary files in Linux?

    - by frustratedCmpNoLongerUser
    I need to compare two binary files and get the output in the form:

        <fileoffset-hex> <file1-byte-hex> <file2-byte-hex>

    for every different byte. So if file1.bin is

        00 90 00 11

    in binary form and file2.bin is

        00 91 00 10

    I want to get something like:

        00000001 90 91
        00000003 11 10

    What is the easiest way to accomplish this? A standard tool? Some third-party tool? (Note: cmp -l should be killed with fire; it uses a decimal system for offsets and octal for bytes.)
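
    cmp -l can still do the heavy lifting if its output is reshaped; a sketch using gawk (strtonum with a leading "0" parses cmp's octal byte values, and the 1-based offset is shifted to 0-based to match the desired output):

        cmp -l file1.bin file2.bin | gawk '{ printf "%08X %02X %02X\n", $1-1, strtonum("0"$2), strtonum("0"$3) }'

    On the example above, this prints exactly the two lines shown.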

    Read the article

  • Windows/Samba connection error

    - by Gomibushi
    I have a Linux file server serving up /home for Linux and Windows users. I was able to connect from my Windows client, but not from a DC; then suddenly I could connect from the DC too. The Linux servers run Centrify clients and as such are part of the domain. All are on the same subnet. This is what log.smbd says, repeatedly:

        [2010/02/11 11:25:57, 0] lib/util_sock.c:read_data(534)
          read_data: read failure for 4 bytes to client 192.168.200.3. Error = Connection reset by peer

    On Windows it appeared as an "unknown error". EDIT: the error code is "0x80004005". We are developing a system dependent on the Samba share and are worried this will appear again. It would be nice to pinpoint the root of this. Any ideas what this might be? Places to look?
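
    When it next happens, reproducing from the shell while the logs are verbose may help; a hedged sketch (share and user names are placeholders):

        smbclient //fileserver/homes -U someuser -c 'ls'   # scripted test connection
        sudo smbcontrol smbd debug 3                       # raise the log level on the fly
        tail -f /var/log/samba/log.smbd

    "Connection reset by peer" on read_data generally means the client side closed mid-conversation, so the event logs on the DC are worth checking at the same timestamps.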

    Read the article

  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo covers basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore difficult to generate enough problems. I've tried the following to get queries into the slow query log:

    - Set the slow query time to 1 second.
    - Deleted multiple indexes.
    - Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
    - Scripted some basic webpage calls using wget.

    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skills to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL or another Linux trick beyond stress.
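
    MySQL can manufacture slowness on its own; both of these are built-in functions, no load generator needed:

        mysql -e "SELECT SLEEP(3);"                        # takes 3 s, lands in the slow query log
        mysql -e "SELECT BENCHMARK(100000000, MD5('x'));"  # burns CPU inside mysqld

    SLEEP() is handy for demonstrating the log itself; BENCHMARK() is closer to a genuinely expensive query if the demo should look realistic.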

    Read the article

  • Measuring cumulative network statistics per user or per process

    - by zsimpson
    I've been googling for hours. Under Linux, I want to know the cumulative bytes sent and received by user or by process, over all IP protocols. The best I've found in my searches is that it's possible to use iptables to mark packets for a user, for example:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner test -j MARK --set-mark 1

    It appears that tc can then shape traffic with that, but I just want the statistic; I don't want to shape the traffic. I want something like: "user U has transmitted X MB since time Y". I can't figure out how to get statistics from these marked packets. Also, I've looked at nethogs, but they seem to measure the instantaneous flow, and I need cumulative counts. Anyone have ideas?
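
    Every iptables rule keeps its own packet and byte counters, so a rule with no -j target acts as a pure accounting rule, and the counters are cumulative until zeroed. A sketch for one user:

        iptables -t mangle -A OUTPUT -m owner --uid-owner test    # count only, don't touch
        iptables -t mangle -L OUTPUT -v -x -n                     # read the pkts/bytes columns
        iptables -t mangle -Z OUTPUT                              # reset counters, e.g. at "time Y"

    Note the owner match only works for locally generated traffic (the OUTPUT chain), and per-process accounting is harder, since rules match at the packet level rather than per process lifetime.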

    Read the article

  • Apache Serving 403 Forbidden after OS X Snow Leopard Upgrade to Version 10.6.6

    - by Ian Oxley
    I've just upgraded my MacBook Pro to OS X Snow Leopard version 10.6.6 and now Apache is misbehaving:

    - requests to http://localhost/ generate a 403 Forbidden response -- FIXED
    - requests to any of my virtual hosts seem to generate a 200 OK response, but contain zero bytes

    Some further info that might be useful:

    - I'm using the Apache that comes bundled with OS X.
    - I'm using PHP from http://www.entropy.ch/software/macosx/php/ (which is in /usr/local/bin).
    - I've had a look at the Apache error log, and the only error seems to be the following: [notice] child pid 744 exit signal Segmentation fault (11)

    I'm completely stumped by this. Any help would be much appreciated.

    UPDATE: I've managed to resolve the 403 Forbidden error thanks to http://techtrouts.com/mac-os-x-105-web-sharing-forbidden-403-on-httplocalhostusername/. I'm still having the second problem, though, for any request; e.g. this now happens when I request http://localhost.

    Read the article

  • My HP Vista-based laptop has become very slow recently

    - by goldenmean
    My HP laptop has Vista Home Premium. When I try to start Firefox or Internet Explorer, it becomes very slow; no other app is affected. When I checked Performance in Task Manager, it shows the free physical memory as 0 bytes, almost always. This started recently; earlier it didn't used to be zero. The laptop has 2 GB of RAM. I have nothing running in my tray except the sound control, the laptop power plan indicator and the network status indicator. There are no other processes whose memory usage adds up to anywhere near enough to leave free memory at 0. So what could be hogging the memory and making the laptop so slow? Any pointers would help, as it is crawling at the moment.

    Read the article

  • Prevent lazy loading in NHibernate

    - by Ciaran
    Hi, I'm storing some blobs in my database, so I have a Document table and a DocumentContent table. Document contains a filename, description etc. and has a DocumentContent property. I have a Silverlight client, so I don't want to load up and send the DocumentContent to the client unless I explicitly ask for it, but I'm having trouble doing this. I've read the blog post by Davy Brion. I have tried placing lazy="false" in my config and removing the virtual access modifier, but have had no luck with it as yet. Every time I do a Session.Get(id), the DocumentContent is retrieved via an outer join. I only want this property to be populated when I explicitly join onto this table and ask for it. Any help is appreciated.

    My NHibernate mapping is as follows:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Jrm.Model" namespace="Jrm.Model">
          <class name="JrmDocument" lazy="false">
            <id name="JrmDocumentID">
              <generator class="native" />
            </id>
            <property name="FileName"/>
            <property name="Description"/>
            <many-to-one name="DocumentContent" class="JrmDocumentContent" unique="true" column="JrmDocumentContentID" lazy="false"/>
          </class>
          <class name="JrmDocumentContent" lazy="false">
            <id name="JrmDocumentContentID">
              <generator class="native" />
            </id>
            <property name="Content" type="BinaryBlob" lazy="false">
              <column name="FileBytes" sql-type="varbinary(max)"/>
            </property>
          </class>
        </hibernate-mapping>

    and my classes are:

        [DataContract]
        public class JrmDocument : ModelBase
        {
            private int jrmDocumentID;
            private JrmDocumentContent documentContent;
            private long maxFileSize;
            private string fileName;
            private string description;

            public JrmDocument() { }

            public JrmDocument(string fileName, long maxFileSize)
            {
                DocumentContent = new JrmDocumentContent(File.ReadAllBytes(fileName));
                FileName = new FileInfo(fileName).Name;
            }

            [DataMember]
            public virtual int JrmDocumentID
            {
                get { return jrmDocumentID; }
                set { jrmDocumentID = value; OnPropertyChanged("JrmDocumentID"); }
            }

            [DataMember]
            public JrmDocumentContent DocumentContent
            {
                get { return documentContent; }
                set { documentContent = value; OnPropertyChanged("DocumentContent"); }
            }

            [DataMember]
            public virtual long MaxFileSize
            {
                get { return maxFileSize; }
                set { maxFileSize = value; OnPropertyChanged("MaxFileSize"); }
            }

            [DataMember]
            public virtual string FileName
            {
                get { return fileName; }
                set { fileName = value; OnPropertyChanged("FileName"); }
            }

            [DataMember]
            public virtual string Description
            {
                get { return description; }
                set { description = value; OnPropertyChanged("Description"); }
            }
        }

        [DataContract]
        public class JrmDocumentContent : ModelBase
        {
            private int jrmDocumentContentID;
            private byte[] content;

            public JrmDocumentContent() { }

            public JrmDocumentContent(byte[] bytes)
            {
                Content = bytes;
            }

            [DataMember]
            public int JrmDocumentContentID
            {
                get { return jrmDocumentContentID; }
                set { jrmDocumentContentID = value; OnPropertyChanged("JrmDocumentContentID"); }
            }

            [DataMember]
            public byte[] Content
            {
                get { return content; }
                set { content = value; OnPropertyChanged("Content"); }
            }
        }

    Read the article

  • Munin apache_watch_ plugin does not show any data in the graphs that show per-vhost activity

    - by ovais.tariq
    I am using Munin for monitoring my server. I installed the apache_watch_ plugin so that I could see Apache-related activity. The graphs for apache_accesses, apache_processes and apache_volume work fine, but the graphs for Apache documents served, Apache input/output (bytes) and Apache requests don't show any activity at all. The important thing is that the graphs that don't show anything are the ones supposed to show data divided per vhost. I am on Ubuntu 10.04.1 LTS (Lucid Lynx) and the Munin version is 1.4.4. One thing that I think may be the cause is that I have a separate config file for every vhost, all included in the main config. But I can't really figure out a solution. Any help will be greatly appreciated!
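
    Two hedged checks that may narrow this down: run the plugin by hand the way Munin would, and confirm mod_status answers, since the Apache plugins scrape it (the instance name below is a hypothetical example of this plugin's naming; use whatever is symlinked in /etc/munin/plugins):

        munin-run apache_watch_documents    # hypothetical instance name
        curl http://127.0.0.1/server-status?auto

    If server-status lacks detail, ExtendedStatus On in the Apache config is typically required for per-request data.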

    Read the article

  • File and Printer Sharing port issue in Windows 7 64-bit

    - by Mohit
    So, this is totally weird and I could not find any way to fix it. In simple words, after diagnosis, this is what is happening: on a Windows 7 64-bit box, whenever I try to access a server using a file share, it tries to connect to port 80 (HTTP) in place of 445. I observed that using Wireshark. Telnet to 445 works. Any clue how to fix this? I scanned the machine using Malwarebytes, Symantec and MS Live Essentials. I don't want to recreate the whole machine again. Please help me here. Regards, Mohit Thakral

    Read the article

  • Creating a FAT file system and saving it into a file in GNU/Linux?

    - by RubenT
    I'll tell you my problem: I want to create a FAT file system and save it into a file, so I can mount it in Linux using something like:

        sudo mount -t msdos <file> <dest_folder>

    Maybe I'm wrong and this cannot be done. Anyway, the problem is this: I'm trying to create the file containing a FAT file system by running this command:

        sudo mkfs.vfat -F 32 -r 112 -S 512 -v -C "test.fat" 100

    That, according to the mkfs man page, should create a FAT32 file system with 112 root-directory entries, a logical sector size of 512 bytes and 100 blocks in total, and save it into "test.fat". But it fails, and bash tells me:

        mkfs.vfat: unable to create test.fat

    What is going on? I think I am misunderstanding how mkfs works and how to use it. Is it possible to write a file system into a file?
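
    For what it's worth, writing a file system into a plain file does work. A hedged sketch that succeeds on typical systems; note that FAT32 needs roughly 65,525+ clusters, so the 100-block image above would be far too small even if the file were created:

        mkfs.vfat -F 32 -C test.fat 65536    # -C creates test.fat; 65536 1-KiB blocks = 64 MiB
        sudo mkdir -p /mnt/test
        sudo mount -o loop -t vfat test.fat /mnt/test

    One common cause of the "unable to create" message is that the target file already exists; -C refuses to overwrite it.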

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    OK, normally I understand when my server gives me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins; the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend, Configuration - Payment Methods gives me the following out-of-memory error:

        Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84

    Now this is where I'm confused: it had allocated more than it tried to allocate? Am I correct there? So how is it running out of memory? My server has 6 GB RAM, an SSD and 2 CPUs running WHM with a few other low-traffic sites on it. I set my PHP memory limit to 100 MB, 1000 MB and finally unlimited, but all to no avail! I'm completely lost here; would really appreciate some expertise on this. Cheers

    Read the article

  • What would cause different rates of packet loss between client and server in UDP?

    - by febreezey
    If I've implemented a reliable UDP file transfer protocol, and I deliberately drop a percentage of packets during transmission, why would it be more evident that transmission time increases with the packet loss percentage going from the client to the server, as opposed to from the server to the client? Is this something that can be explained as a result of the protocol? Here are my numbers from two separate experiments. I kept the max packet size at 500 bytes and the opposite-direction packet loss at 5%, with a 1 megabyte file.

    Server-to-client loss percentage varied (1 MB file, 500 B segments, client-to-server loss 5%):

        1%  : 17253 ms
        3%  :  3388 ms
        5%  :  7252 ms
        10% :  6229 ms
        11% : 12346 ms
        13% : 11282 ms
        15% :  9252 ms
        20% : 11266 ms

    Client-to-server loss percentage varied (1 MB file, 500 B segments, server-to-client loss 5%):

        1%  :   4227 ms
        3%  :   4334 ms
        5%  :   3308 ms
        10% :  31350 ms
        11% :  36398 ms
        13% :  48436 ms
        15% :  65475 ms
        20% : 120515 ms

    You can clearly see an exponential increase in the client-to-server group.

    Read the article
