Search Results

Search found 64711 results on 2589 pages for 'core data'.

  • How to synchronize HTML5 local/webStorage and server-side storage?

    - by thSoft
    I'm looking for a way to transparently and automatically synchronize and replicate data between client-side HTML5 localStorage (or Web Storage) and one or more server-side stores. The only requirement is that it be simple and affordable to install on a regular hosting service. Do you have any experience with libraries or technologies that automate this client-server synchronization and make the data available offline, online, or both? I think this is a fairly common scenario for web applications that support an offline mode...
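
    A minimal sketch of one common approach, assuming a hypothetical /sync endpoint on the server (the endpoint, key names, and payload shape below are made up for illustration): queue writes in localStorage and replay them when the browser reports it is back online.

        // Queue each write locally so it survives going offline.
        function save(key, value) {
            localStorage.setItem(key, JSON.stringify(value));
            var queue = JSON.parse(localStorage.getItem('syncQueue') || '[]');
            queue.push({ key: key, value: value, ts: new Date().getTime() });
            localStorage.setItem('syncQueue', JSON.stringify(queue));
        }

        // Replay the queue against the server when connectivity returns.
        function flushQueue() {
            if (!navigator.onLine) return;
            var queue = JSON.parse(localStorage.getItem('syncQueue') || '[]');
            if (queue.length === 0) return;
            // POST the queue to the server-side /sync endpoint here and,
            // on success, clear it: localStorage.removeItem('syncQueue');
        }

        window.addEventListener('online', flushQueue);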

  • tmpfile and gzip combination problem

    - by Vojtech R.
    I have a problem with this code:

        file = tempfile.TemporaryFile(mode='wrb')
        file.write(base64.b64decode(data))
        file.flush()
        os.fsync(file)
        # file.seek(0)
        f = gzip.GzipFile(mode='rb', fileobj=file)
        print f.read()

    I don't know why it doesn't print anything. If I uncomment file.seek, this error occurs:

        File "/usr/lib/python2.5/gzip.py", line 263, in _read
          self._read_gzip_header()
        File "/usr/lib/python2.5/gzip.py", line 162, in _read_gzip_header
          magic = self.fileobj.read(2)
        IOError: [Errno 9] Bad file descriptor

    Just for information, this version works fine:

        x = open("test.gzip", 'wb')
        x.write(base64.b64decode(data))
        x.close()
        f = gzip.GzipFile('test.gzip', 'rb')
        print f.read()
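
    A minimal sketch of the variant that usually works here, assuming data holds a base64-encoded gzip stream: open the temporary file in read/write mode (the non-standard 'wrb' mode string likely leaves the underlying stream write-only, which would explain the Bad file descriptor) and rewind it before handing it to GzipFile.

        import base64, gzip, os, tempfile

        tmp = tempfile.TemporaryFile(mode='w+b')   # read/write binary, unlike 'wrb'
        tmp.write(base64.b64decode(data))          # assumes 'data' is a base64-encoded gzip stream
        tmp.flush()
        os.fsync(tmp.fileno())                     # optional; flush() alone is enough to read back
        tmp.seek(0)                                # rewind so GzipFile starts at the gzip header
        f = gzip.GzipFile(mode='rb', fileobj=tmp)
        print f.read()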

  • Any port of backtrace available for uclibc?

    - by user303967
    We are running uClibc Linux on an ARM9. The problem is that uClibc doesn't support backtrace(), so when a core dump happens we cannot grab the call stack. Does anyone have a good solution for this? For example, an existing port of backtrace() for uClibc, or another good way to grab the call stack when a crash happens (uClibc + ARM + Linux)? Thanks.

  • Binding Source suspends itself when I don't want it to.

    - by Scott Chamberlain
    I have two data tables set up in a master-details configuration with a relation "Ticket_CallSegments" between them. I also have two BindingSources and a DataGridView configured like this (init code):

        //
        // dgvTickets
        //
        this.dgvTickets.AllowUserToAddRows = false;
        this.dgvTickets.AllowUserToDeleteRows = false;
        this.dgvTickets.AllowUserToResizeRows = false;
        this.dgvTickets.AutoGenerateColumns = false;
        this.dgvTickets.ColumnHeadersHeightSizeMode = System.Windows.Forms.DataGridViewColumnHeadersHeightSizeMode.AutoSize;
        this.dgvTickets.Columns.AddRange(new System.Windows.Forms.DataGridViewColumn[] {
            this.cREATEDATEDataGridViewTextBoxColumn,
            this.contactFullNameDataGridViewTextBoxColumn,
            this.pARTIALNOTEDataGridViewTextBoxColumn});
        this.dgvTickets.DataSource = this.ticketsDataSetBindingSource;
        this.dgvTickets.Dock = System.Windows.Forms.DockStyle.Fill;
        this.dgvTickets.Location = new System.Drawing.Point(0, 0);
        this.dgvTickets.MultiSelect = false;
        this.dgvTickets.Name = "dgvTickets";
        this.dgvTickets.ReadOnly = true;
        this.dgvTickets.RowHeadersVisible = false;
        this.dgvTickets.SelectionMode = System.Windows.Forms.DataGridViewSelectionMode.FullRowSelect;
        this.dgvTickets.Size = new System.Drawing.Size(359, 600);
        this.dgvTickets.TabIndex = 0;
        //
        // ticketsDataSetBindingSource
        //
        this.ticketsDataSetBindingSource.DataMember = "Ticket";
        this.ticketsDataSetBindingSource.DataSource = this.ticketsDataSet;
        this.ticketsDataSetBindingSource.CurrentChanged += new System.EventHandler(this.ticketsDataSetBindingSource_CurrentChanged);
        //
        // callSegementBindingSource
        //
        this.callSegementBindingSource.DataMember = "Ticket_CallSegments";
        this.callSegementBindingSource.DataSource = this.ticketsDataSetBindingSource;
        this.callSegementBindingSource.Sort = "CreateDate";

        //Function to update a rich text box.
        private void ticketsDataSetBindingSource_CurrentChanged(object sender, EventArgs e)
        {
            StringBuilder sb = new StringBuilder();
            rtbTickets.Clear();
            foreach (DataRowView drv in callSegementBindingSource)
            {
                TicketsDataSet.CallSegmentsRow row = (TicketsDataSet.CallSegmentsRow)drv.Row;
                sb.AppendLine("**********************************");
                sb.AppendLine(String.Format("CreateDate: {1}, Created by: {0}", row.USERNAME, row.CREATEDATE));
                sb.AppendLine("**********************************");
                rtbTickets.SelectionFont = new Font("Arial", (float)11, FontStyle.Bold);
                rtbTickets.SelectedText = sb.ToString();
                rtbTickets.SelectionFont = new Font("Arial", (float)11, FontStyle.Regular);
                rtbTickets.SelectedText = row.NOTES + "\n\n";
            }
        }

    However, when ticketsDataSetBindingSource_CurrentChanged gets called because I select a new row in the DataGridView, callSegementBindingSource.IsBindingSuspended is set to true and my text box does not update correctly (it seems to always pull from the same row in CallSegments). Can anyone see what I am doing wrong, or tell me how to resume the binding so it will pull the correct data?
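
    One hedged workaround sketch (not from the original post): instead of enumerating the child BindingSource while it is suspended, read the related rows straight off the current parent row with DataRowView.CreateChildView, using the same relation name.

        private void ticketsDataSetBindingSource_CurrentChanged(object sender, EventArgs e)
        {
            DataRowView parent = ticketsDataSetBindingSource.Current as DataRowView;
            if (parent == null) return;

            rtbTickets.Clear();
            // Walk the child rows via the relation rather than callSegementBindingSource.
            foreach (DataRowView drv in parent.CreateChildView("Ticket_CallSegments"))
            {
                TicketsDataSet.CallSegmentsRow row = (TicketsDataSet.CallSegmentsRow)drv.Row;
                // ... format row.USERNAME, row.CREATEDATE and row.NOTES into rtbTickets as before ...
            }
        }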

  • IDynamicObject could not be found?!

    - by cvista
    When trying to run the sample code here: http://www.nikhilk.net/Live-Search-REST-API.aspx I get:

        Error 52: The type or namespace name 'IDynamicObject' could not be found (are you missing a using directive or an assembly reference?)
        E:\repo\NikhilK-dynamicrest-a93707a\NikhilK-dynamicrest-a93707a\Core\DynamicObject.cs 19 43 DynamicRest

    The project is running .NET 4 - shouldn't this be part of the standard imports? Am I missing something? What do I need to do to make this work?

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and responses in as close as possible to the raw, on-the-wire format (i.e. including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response), but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc., and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations?

    Edit: I note that HttpRequest has a SaveAs method which can save the request to disk, but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan.

    Edit 2: Please note that I said the whole request, including method, path, headers, etc. The current responses only look at the body streams, which do not include this information.

    Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: "I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire." If you know the complete raw data (including headers, URL, HTTP method, etc.) simply cannot be retrieved, then that would be useful to know. Similarly, if you know how to get it all in the raw format (yes, I still mean including headers, URL, HTTP method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it.

    Please note: before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea; I'm looking for how it can be done.

  • How to handle input and parameter validation between layers?

    - by developr
    If I have a 3-layer web forms application that takes user input, I know I can validate that input using validation controls in the presentation layer. Should I also validate in the business and data layers to protect against SQL injection and other issues? What validations should go in each layer? Another example would be passing an ID to return a record. Should the data layer ensure that the ID is valid, or should that happen in the BLL / UI?

  • Using PHP's IMAP library triggers Kaspersky's Antivirus

    - by TMG
    Hello, I just started working with PHP's IMAP library today, and when imap_fetchbody or imap_body is called, it triggers my Kaspersky antivirus. The viruses are Trojan.Win32.Agent.dmyq and Trojan.Win32.FraudPack.aoda. I am running this off a local development machine with XAMPP and Kaspersky AV. Now, I am sure there are viruses there, since there is spam in the box (who doesn't need some viagra or vicodin these days?). And I know that since the raw body includes attachments and different MIME types, bad stuff can be in the body. So my questions are: are there any risks in using these libraries? I am assuming that the IMAP functions retrieve the body, cache it to disk/memory, and the AV scanning sees the data. Is that correct? Are there any known security concerns with this library (I couldn't find any)? Does it clean up cached message parts properly, or might viral files be sitting somewhere? Is there a better way to get plain text out of the body than this? Right now I am using the following code (credit to Kevin Steffer):

        function get_mime_type(&$structure) {
            $primary_mime_type = array("TEXT", "MULTIPART", "MESSAGE", "APPLICATION", "AUDIO", "IMAGE", "VIDEO", "OTHER");
            if($structure->subtype) {
                return $primary_mime_type[(int) $structure->type] . '/' . $structure->subtype;
            }
            return "TEXT/PLAIN";
        }

        function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) {
            if(!$structure) {
                $structure = imap_fetchstructure($stream, $msg_number);
            }
            if($structure) {
                if($mime_type == get_mime_type($structure)) {
                    if(!$part_number) {
                        $part_number = "1";
                    }
                    $text = imap_fetchbody($stream, $msg_number, $part_number);
                    if($structure->encoding == 3) {
                        return imap_base64($text);
                    } else if($structure->encoding == 4) {
                        return imap_qprint($text);
                    } else {
                        return $text;
                    }
                }
                if($structure->type == 1) /* multipart */ {
                    while(list($index, $sub_structure) = each($structure->parts)) {
                        if($part_number) {
                            $prefix = $part_number . '.';
                        }
                        $data = get_part($stream, $msg_number, $mime_type, $sub_structure, $prefix . ($index + 1));
                        if($data) {
                            return $data;
                        }
                    } // END OF WHILE
                } // END OF MULTIPART
            } // END OF STRUCTURE
            return false;
        } // END OF FUNCTION

        $connection = imap_open($server, $login, $password);
        $count = imap_num_msg($connection);
        for($i = 1; $i <= $count; $i++) {
            $header = imap_headerinfo($connection, $i);
            $from = $header->fromaddress;
            $to = $header->toaddress;
            $subject = $header->subject;
            $date = $header->date;
            $body = get_part($connection, $i, "TEXT/PLAIN");
        }

  • JSON find in JavaScript

    - by zapping
    Is there a better way than looping to find data in JSON? It's for edit and delete.

        for (var k in objJsonResp) {
            if (objJsonResp[k].txtId == id) {
                if (action == 'delete') {
                    objJsonResp.splice(k, 1);
                } else {
                    objJsonResp[k] = newVal;
                }
                break;
            }
        }

    The data is arranged as a list of maps, like:

        [{id: value, pId: value, cId: value, ...}, {id: value, pId: value, cId: value}, ...]
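
    A hedged sketch of the usual alternative (assuming txtId values are unique): build a lookup table once, so each edit or delete becomes a direct index access instead of a scan.

        var index = {};
        for (var i = 0; i < objJsonResp.length; i++) {
            index[objJsonResp[i].txtId] = i;
        }

        // edit
        objJsonResp[index[id]] = newVal;

        // delete (positions shift afterwards, so rebuild or adjust the index)
        objJsonResp.splice(index[id], 1);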

  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a netflow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs. I can see from the Plugin Netflow Statistics page that netflow packets from my ASAs are being received - the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the Netflow interface; I'm just seeing a pie chart showing 100% traffic for eth0. The interfaces and documentation are a little hard to follow, so I am not sure I have got things configured correctly.

    When setting up my NetFlow-device.2 I can specify "Virtual NetFlow Interface Network Address" - the web UI says "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located."

    Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (192.167.0.1/24)?
    If it should be a network address, is this the network in which one of my ASAs is, or the network in which my ntop server is?
    If a host IP address, is this the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else?
    Do I need a separate virtual interface for each ASA I am collecting netflow data from?

    Any guidance would be greatly welcome.

  • How to automatically create Word documents which include list fields from a custom SharePoint list?

    - by Marius
    Hi, is it possible to automatically create Word documents which include list fields from a custom SharePoint list? Here is the scenario:

    - a custom list (over 100 columns)
    - Word templates (not sure where it is best to store them yet)
    - an entry form that will provide data for the templates (or partial data, i.e. client name, sales rep)
    - a form that will have buttons (e.g. 'Create Order Form', 'Create PO')

    The idea is to be able to generate partially populated templates from a custom list with the push of a button. All solutions are really appreciated! Thanks.

  • How to sync two computers using new MobileMe calendar

    - by CesarGon
    I have been using MobileMe for over a year with success. I use it to sync my Outlook calendars on my work and home computers, using Windows 7 and Outlook 2007. The main Outlook calendar folder on my work computer is replicated to MobileMe as "Work" and synced to my home computer, and the main calendar folder on my home computer is replicated to MobileMe as "Home" and synced to my work computer. This means that I can see both the "Work" and "Home" calendars from both computers (as well as from the web interface through me.com), which is very convenient.

    Yesterday I migrated to the new MobileMe calendar, accepting the suggestion that popped up on the me.com website. After the migration, the MobileMe control panel on each of my Windows computers asked me to re-configure my calendar setup, and everything fell apart. The "Home" and "Work" calendar folders in Outlook are now ignored by MobileMe, and new ones named "Home in MobileMe" and "Work in MobileMe" have been created and placed in a separate Outlook data file rather than the default. This means that now:

    - I have four folders, two of which are not replicated to MobileMe.
    - The two folders that are not replicated reside in a separate data file, so alarms and reminders don't work; they're basically useless to me as calendar folders.
    - The button in the MobileMe Control Panel that used to let me specify which MobileMe folder should be synced against the default Outlook folder has gone. MobileMe is now too smart.

    Do you have any idea how to undo this mess and go back to a situation where I have two folders, as described in the first paragraph, which keep synced? I don't want an extra data file. Thanks.

  • Populate SQL Server 2008 with Oracle Loader files

    - by Techpriester
    Hi folks, here's the problem: I have a project that needs to be migrated to Microsoft SQL Server 2008. We have data in text files for the Oracle SQL Loader and now we need to get that data into the SQL Server DB. I could write a program that converts everything into INSERT statements, but there has to be a more comfortable way to do this. Any suggestions? PS: I don't think my company wants to buy additional software to do this job, so that's out.
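
    A hedged sketch of one built-in route, assuming the loader files are plain delimited text: BULK INSERT (or the bcp utility) can load them directly once the terminators are adjusted to match the SQL*Loader control file. The table name, file path, and delimiters below are made up.

        BULK INSERT dbo.TargetTable
        FROM 'C:\import\loader_data.txt'
        WITH (
            FIELDTERMINATOR = '|',
            ROWTERMINATOR   = '\n',
            BATCHSIZE       = 10000
        );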

  • Postfix: errors using Google Apps for SMTP

    - by Zed Said
    I am using Postfix and need to send mail using Google Apps SMTP. I am getting errors after I thought I had set everything up correctly:

        May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)

    main.cf:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = zedsaid.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        #relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        delay_warning_time = 4h
        smtpd_recipient_limit = 16
        # how many errors before back off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12
        ## Gmail Relay
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_mechanism_filter = login
        smtp_tls_eccert_file =
        smtp_tls_eckey_file =
        smtp_use_tls = yes
        smtp_enforce_tls = no
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_received_header = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        debug_peer_list = smtp.gmail.com
        debug_peer_level = 3

    What am I doing wrong?
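
    For reference, a hedged sketch of the relay-auth block that commonly works with Gmail/Google Apps on port 587; "no mechanism available" often just means the Cyrus SASL client plugins (e.g. the libsasl2-modules package on Debian) are not installed, in which case none of these settings take effect until they are.

        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_mechanism_filter = plain, login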

  • jQuery: using user input to control an option of one jQuery function

    - by Tristan
    Hello, I'd like an input to control this:

        jQuery.ajax({
            type: "get",
            dataType: "jsonp",
            url: "http://www.foo.com/something.php",
            data: { numberInput: "NUMBER I WANT TO CONTROL" },

    On the HTML side I have:

        <input type="text" id="jqueryControl" />

    When a user enters a number into jqueryControl, I want to insert it into the .ajax call and reload the data according to the new value entered. Any idea how to do that, please? Thanks.
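
    A hedged sketch of the usual wiring (the URL and data key are taken from the snippet above; what to do with the response is left as a comment): re-run the request whenever the input changes, passing its current value as numberInput.

        $('#jqueryControl').change(function () {
            jQuery.ajax({
                type: "get",
                dataType: "jsonp",
                url: "http://www.foo.com/something.php",
                data: { numberInput: $(this).val() },
                success: function (resp) {
                    // redraw the page with the reloaded data here
                }
            });
        });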

  • Problems mounting HPUX LVM+VXFS filesystem on Linux

    - by golimar
    I have a physical disk from a HPUX system that I need to access from a Debian Linux for ia64 system. From the hpux-lvm-tools project I have the tools to access the HPUX LVMs (Linux LVM has a different format) and I also have the freevxfs driver. I know beforehand that the disk has three partitions, and that the biggest one contains LVM volumes, and some of those are VxFS filesystems. I can see the partitions:

        # cat /proc/partitions
        major minor  #blocks  name
           8    32  143374744 sdc
           8    33     512000 sdc1
           8    34  142452736 sdc2
           8    35     409600 sdc3

    It finds a VG in one of the disk partitions:

        # ./vgscan_hpux
        On /dev/sdc2 - vg1328874723
        # ./pvdisplay_hpux /dev/sdc2
        PV General Information
        ----------------------
        VG Creation Time         Fri Feb 10 12:52:03 2012
        Physical Volume ID       1766760336 1328874723
        Volume Group ID          1766760336 1328874723
        Physical Volumes in VG   1766760336 1328874723
        VG Actication Mode       0 - LOCAL
        PE Size                  64 MBs
        Lvol sizes
        ----------
        lvol1 - 8 Extents - 512 MBs
        lvol2 - 192 Extents - 12288 MBs
        lvol3 - 16 Extents - 1024 MBs
        ...
        lvol21 - 13 Extents - 832 MBs
        lvol22 - 224 Extents - 14336 MBs
        lvol23 - 16 Extents - 1024 MBs

    Then I activate that VG and some new devices appear in my system:

        # ./pvactivate_hpux /dev/sdc2
        VG vg1328874723 Activated succesfully with 23 lvols.
        #
        # ll /dev/mapper/
        total 0
        crw------- 1 root root 10, 59 Nov 26 16:08 control
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol1 -> ../dm-0
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol10 -> ../dm-9
        ...
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol8 -> ../dm-7
        lrwxrwxrwx 1 root root      7 Nov 26 16:38 vg1328874723-lvol9 -> ../dm-8

    But:

        # mount /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: you must specify the filesystem type
        # mount -t vxfs /dev/mapper/vg1328874723-lvol18 /mnt/tmp
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1328874723-lvol18,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        # lsmod | grep vxfs
        freevxfs               23905  0

    I also tried to identify the raw data with the file command and it just says 'data':

        # file -s /dev/mapper/vg1328874723-lvol18
        /dev/mapper/vg1328874723-lvol18: symbolic link to `../dm-17'
        # file -s /dev/dm-17
        /dev/dm-17: data
        #

    Any clues?

  • Memory layout of executable

    - by Ross
    Hi all, when loading an executable, segments like the code, data, bss and so on need to be placed in memory. I am just wondering if someone could tell me where, on a standard x86 system for example, the libc library is placed. Is it at the top or the bottom of memory? My guess is at the bottom, close to the application code, i.e. it would look something like this:

        --------- 0x1000
         Stack
           |
           V

           ^
           |
         Heap
        ----------
         Data + BSS
        ----------
         App Code
        ----------
         libc
        --------- 0x0000

    Thanks a lot, Ross
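
    As a side note not in the original question: on a Linux box the kernel exposes each process's actual mappings, including where libc landed, so the layout can simply be inspected (some_program below is a placeholder for any running binary).

        cat /proc/self/maps            # mappings of the cat process itself
        pmap $(pidof some_program)     # or the mappings of another running program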

  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction step of tar so that the images are decoded from base64 back to standard PNG encoding after the files are unpacked. A simple

        cat $input-file | base64 -d > $output-file

    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of tar's execution. However, the documentation explains that these variables, along with other variables that configure tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.

    Background information not included in the long story short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to Github yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
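
    A hedged sketch of a plain wrapper function, as an alternative to the backup-specs hook machinery (the function name is made up, and it assumes every extracted .png really is base64 text):

        untar_and_decode() {
            local archive="$1" dest="${2:-.}"
            tar -xf "$archive" -C "$dest"
            # decode every extracted PNG in place
            find "$dest" -name '*.png' -print0 | while IFS= read -r -d '' f; do
                base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
            done
        }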

  • gpg symmetric encryption using pipes

    - by Thomas
    I'm trying to generate keys to lock my drive (using dm-crypt with LUKS) by pulling data from /dev/random and then encrypting that with GPG. The guide I'm using suggests the following command:

        dd if=/dev/random count=1 | gpg --symmetric -a > ./[drive]_key.gpg

    If you run it without the pipe and feed it a file, it pops up a curses prompt for you to type in a passphrase. However, when I pipe in the data, it repeats the following message four times and sits there frozen:

        pinentry-curses: no LC_CTYPE known assuming UTF-8

    It also says "can't connect to '/root/.gnupg/S.gpg-agent': File or directory doesn't exist", however I am assuming that this has nothing to do with it, since it shows up even when the input is from a file. So I guess my question boils down to this: is there a way to force gpg to accept the passphrase from the command line, or in some other way get this to work, or will I have to write the data from /dev/random to a temporary file and then encrypt that file? (Which as far as I know should be alright, given that I'm doing this from the live CD and haven't yet created the swap, so there should be no way for it to be written to disk.)
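
    A hedged sketch of one way around the prompt, assuming a passphrase file (passfile below is a made-up name) is acceptable: keep stdin for the key material and hand gpg the passphrase on a separate descriptor in batch mode.

        dd if=/dev/random count=1 \
          | gpg --symmetric -a --batch --passphrase-fd 3 3< passfile \
          > ./[drive]_key.gpg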

  • Processing incorrect MAC addresses from 802.11 frames with pcap

    - by Quentin Swain
    I'm working throurgh a project with pcap and wireless. Following an example posted in response to oe of my earlier questions I am trying to extract the mac addresses from wireless frames. I have created structures for the radiotap header and a basic management frame. For some reason when it comes to trying to output the mac addresses I am printing out the wrong data. When I compare to wireshark I don't see why the radio tap data is printing out correctly but the mac addresses are not. I don't see any additional padding in the hex dump that wireshark displays when i look at the packets and compare the packets that I have captured. I am somewhat famialar with c but not an expert so maybe I am not using the pointers and structures properly could someone help show me what I am doing wrong? Thanks, Quentin // main.c // MacSniffer // #include <pcap.h> #include <string.h> #include <stdlib.h> #define MAXBYTES2CAPTURE 65535 #ifdef WORDS_BIGENDIAN typedef struct frame_control { unsigned int subtype:4; /*frame subtype field*/ unsigned int protoVer:2; /*frame type field*/ unsigned int version:2; /*protocol version*/ unsigned int order:1; unsigned int protected:1; unsigned int moreDate:1; unsigned int power_management:1; unsigned int retry:1; unsigned int moreFrag:1; unsigned int fromDS:1; unsigned int toDS:1; }frame_control; struct ieee80211_radiotap_header{ u_int8_t it_version; u_int8_t it_pad; u_int16_t it_len; u_int32_t it_present; u_int64_t MAC_timestamp; u_int8_t flags; u_int8_t dataRate; u_int16_t channelfrequency; u_int16_t channFreq_pad; u_int16_t channelType; u_int16_t channType_pad; u_int8_t ssiSignal; u_int8_t ssiNoise; u_int8_t antenna; }; #else typedef struct frame_control { unsigned int protoVer:2; /* protocol version*/ unsigned int type:2; /*frame type field (Management,Control,Data)*/ unsigned int subtype:4; /* frame subtype*/ unsigned int toDS:1; /* frame coming from Distribution system */ unsigned int fromDS:1; /*frame coming from Distribution system */ unsigned int moreFrag:1; /* More fragments?*/ unsigned int retry:1; /*was this frame retransmitted*/ unsigned int powMgt:1; /*Power Management*/ unsigned int moreDate:1; /*More Date*/ unsigned int protectedData:1; /*Protected Data*/ unsigned int order:1; /*Order*/ }frame_control; struct ieee80211_radiotap_header{ u_int8_t it_version; u_int8_t it_pad; u_int16_t it_len; u_int32_t it_present; u_int64_t MAC_timestamp; u_int8_t flags; u_int8_t dataRate; u_int16_t channelfrequency; u_int16_t channelType; int ssiSignal:8; int ssiNoise:8; }; #endif struct wi_frame { u_int16_t fc; u_int16_t wi_duration; u_int8_t wi_add1[6]; u_int8_t wi_add2[6]; u_int8_t wi_add3[6]; u_int16_t wi_sequenceControl; // u_int8_t wi_add4[6]; //unsigned int qosControl:2; //unsigned int frameBody[23124]; }; void processPacket(u_char *arg, const struct pcap_pkthdr* pkthdr, const u_char* packet) { int i= 0, *counter = (int *) arg; struct ieee80211_radiotap_header *rh =(struct ieee80211_radiotap_header *)packet; struct wi_frame *fr= (struct wi_frame *)(packet + rh->it_len); u_char *ptr; //printf("Frame Type: %d",fr->wi_fC->type); printf("Packet count: %d\n", ++(*counter)); printf("Received Packet Size: %d\n", pkthdr->len); if(rh->it_version != NULL) { printf("Radiotap Version: %d\n",rh->it_version); } if(rh->it_pad!=NULL) { printf("Radiotap Pad: %d\n",rh->it_pad); } if(rh->it_len != NULL) { printf("Radiotap Length: %d\n",rh->it_len); } if(rh->it_present != NULL) { printf("Radiotap Present: %c\n",rh->it_present); } if(rh->MAC_timestamp != NULL) { printf("Radiotap 
Timestamp: %u\n",rh->MAC_timestamp); } if(rh->dataRate != NULL) { printf("Radiotap Data Rate: %u\n",rh->dataRate); } if(rh->channelfrequency != NULL) { printf("Radiotap Channel Freq: %u\n",rh->channelfrequency); } if(rh->channelType != NULL) { printf("Radiotap Channel Type: %06x\n",rh->channelType); } if(rh->ssiSignal != NULL) { printf("Radiotap SSI signal: %d\n",rh->ssiSignal); } if(rh->ssiNoise != NULL) { printf("Radiotap SSI Noise: %d\n",rh->ssiNoise); } ptr = fr->wi_add1; int k= 6; printf("Destination Address:"); do{ printf("%s%X",(k==6)?" ":":",*ptr++); } while(--k>0); printf("\n"); ptr = fr->wi_add2; k=0; printf("Source Address:"); do{ printf("%s%X",(k==6)?" ":":",*ptr++); }while(--k>0); printf("\n"); ptr = fr->wi_add3; k=0; do{ printf("%s%X",(k==6)?" ":":",*ptr++); } while(--k>0); printf("\n"); /* for(int j = 0; j < 23124;j++) { if(fr->frameBody[j]!= NULL) { printf("%x",fr->frameBody[j]); } } */ for (i = 0;i<pkthdr->len;i++) { if(isprint(packet[i +rh->it_len])) { printf("%c",packet[i + rh->it_len]); } else{printf(".");} //print newline after each section of the packet if((i%16 ==0 && i!=0) ||(i==pkthdr->len-1)) { printf("\n"); } } return; } int main(int argc, char** argv) { int count = 0; pcap_t* descr = NULL; char errbuf[PCAP_ERRBUF_SIZE], *device = NULL; struct bpf_program fp; char filter[]="wlan broadcast"; const u_char* packet; memset(errbuf,0,PCAP_ERRBUF_SIZE); device = argv[1]; if(device == NULL) { fprintf(stdout,"Supply a device name "); } descr = pcap_create(device,errbuf); pcap_set_rfmon(descr,1); pcap_set_promisc(descr,1); pcap_set_snaplen(descr,30); pcap_set_timeout(descr,10000); pcap_activate(descr); int dl =pcap_datalink(descr); printf("The Data Link type is %s",pcap_datalink_val_to_name(dl)); //pcap_dispatch(descr,MAXBYTES2CAPTURE,1,512,errbuf); //Open device in promiscuous mode //descr = pcap_open_live(device,MAXBYTES2CAPTURE,1,512,errbuf); /* if(pcap_compile(descr,&fp,filter,0,PCAP_NETMASK_UNKNOWN)==-1) { fprintf(stderr,"Error compiling filter\n"); exit(1); } if(pcap_setfilter(descr,&fp)==-1) { fprintf(stderr,"Error setting filter\n"); exit(1); } */ pcap_loop(descr,0, processPacket, (u_char *) &count); return 0; }
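
    One hedged observation, offered as a sketch rather than a full fix: the second and third address loops start k at 0, so the --k > 0 test fails after a single iteration and only one octet of those addresses is printed. A small helper, assuming the addresses really do sit right after the radiotap header as the code intends, avoids the counter juggling entirely:

        /* print one 6-byte 802.11 address with a label */
        static void print_mac(const char *label, const u_int8_t *a)
        {
            printf("%s %02X:%02X:%02X:%02X:%02X:%02X\n",
                   label, a[0], a[1], a[2], a[3], a[4], a[5]);
        }

        /* possible usage inside processPacket():
           print_mac("Destination Address:", fr->wi_add1);
           print_mac("Source Address:",      fr->wi_add2);
           print_mac("Address 3:",           fr->wi_add3);  */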

  • How to back up a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (network-attached storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time they take; I'm hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.
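
    A hedged sketch of one common approach on Vista/Windows 7, where robocopy ships with the OS and copes with long paths and restarts better than xcopy; the share name, drive letter, and log path below are made up.

        robocopy \\NAS\archive F:\nas-backup /MIR /FFT /R:2 /W:5 /LOG:C:\logs\nas-backup.log

    Because /MIR mirrors deletions as well as additions, restoring is just copying the folder back (or running the same command with the source and destination swapped).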
