Search Results

Search found 13676 results on 548 pages for 'python dude'.

Page 388/548

  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (not lower-layer individual packets or frames). Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example etterfilter scripting code:

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "gzip")) {
                replace("gzip", " ");
                msg("whited out gzip\n");
            }
        }
        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "deflate")) {
                replace("deflate", " ");
                msg("whited out deflate\n");
            }
        }

    (http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833)

    I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something? I would like to have a configuration similar to Ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of Unix command-line filters:

        [eth0] <----------> [my filter] <----------> [eth1]

    What I am already aware of, but which is not suitable:
    Tun/Tap - works at the lower packet layer; I need to work with the higher-layer streams.
    Ettercap - I can't find any way to do replacements other than the restricted capabilities in the example above.
    Hooking into some VPN software? - I just can't figure out which, or exactly how.
    libnetfilter_queue - works with lower-layer packets, not TCP/IP streams.

    Again, the rewriting should occur at the transport layer (Layer 4) as it does in this example, instead of a lower-layer packet-based approach. Exact code will help immensely! Thanks!
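
    A hedged sketch of the stream-level idea, written as a transparent TCP relay rather than an Ettercap filter: each Layer 4 payload chunk passes through a user-supplied filter function before being forwarded. The listen port, upstream address and filter_stream() are illustrative assumptions, not part of the original question.

        # Sketch: relay TCP payloads through an arbitrary rewriting function.
        # LISTEN_PORT, UPSTREAM and filter_stream() are invented for illustration.
        import socket
        import threading

        LISTEN_PORT = 8080
        UPSTREAM = ("192.0.2.10", 80)        # hypothetical real web server

        def filter_stream(chunk):
            # The poster's "pipe filter" logic would go here.
            return chunk.replace(b"gzip", b"    ")

        def pump(src, dst, transform):
            # Copy one direction of the stream, rewriting each chunk in flight.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(transform(data))
            dst.close()

        def handle(client):
            upstream = socket.create_connection(UPSTREAM)
            # Rewrite client->server traffic; pass server->client traffic through.
            threading.Thread(target=pump, args=(client, upstream, filter_stream)).start()
            threading.Thread(target=pump, args=(upstream, client, lambda d: d)).start()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", LISTEN_PORT))
        server.listen(5)
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,)).start()

    This is a proxy rather than the bridged, address-preserving setup the poster wants; getting the same effect on a transparent bridge would still need something like ebtables/iptables redirection in front of it.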

    Read the article

  • concatenate multi values in one record without duplication

    - by mikehjun
    I have a dbf table like below, which is the result of a one-to-many join from two tables. I want to have unique zone values for each taxlot id (tid).

        table name: input
        tid ----- zone
        1 ------ A
        1 ------ A
        1 ------ B
        1 ------ C
        2 ------ D
        2 ------ E
        3 ------ C

    Desired output table:

        table name: output
        tid ----- zone
        1 ------ A, B, C
        2 ------ D, E
        3 ------ C

    I got some help but couldn't make it work:

        inputTbl = r"C:\temp\input.dbf"
        taxIdZoningDict = {}
        searchRows = gp.searchcursor(inputTbl)
        searchRow = searchRows.next()
        while searchRow:
            if searchRow.TID in taxIdZoningDict:
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            else:
                taxIdZoningDict[searchRow.TID] = set()  # a set prevents duplicates!
                taxIdZoningDict[searchRow.TID].add(searchRow.ZONE)
            searchRow = searchRows.next()

        outputTbl = r"C:\temp\output.dbf"
        gp.CreateTable_management(r"C:\temp", "output.dbf")
        gp.AddField_management(outputTbl, "TID", "LONG")
        gp.AddField_management(outputTbl, "ZONES", "TEXT", "", "", "20")

        tidList = taxIdZoningDict.keys()
        tidList.sort()  # sorts in ascending order

        insertRows = gp.insertcursor(outputTbl)
        for tid in tidList:
            concatString = ""
            for zone in taxIdZoningDict[tid]:
                concatString = concatString + zone + ","
            insertRow = insertRows.newrow()
            insertRow.TID = tid
            insertRow.ZONES = concatString[:-1]
            insertRows.insertrow(insertRow)
        del insertRow
        del insertRows
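
    The grouping-and-joining logic can be sketched in plain Python, independent of the geoprocessing (gp) cursor calls; the rows list below is invented sample data mirroring the example table.

        # Sketch: collect unique zones per tid, then join them into one string.
        rows = [(1, "A"), (1, "A"), (1, "B"), (1, "C"), (2, "D"), (2, "E"), (3, "C")]

        zones_by_tid = {}
        for tid, zone in rows:
            zones_by_tid.setdefault(tid, set()).add(zone)   # set drops duplicates

        for tid in sorted(zones_by_tid):
            print tid, ", ".join(sorted(zones_by_tid[tid]))
        # 1 A, B, C
        # 2 D, E
        # 3 C

    The same ", ".join(sorted(...)) expression could replace the manual concatString loop in the cursor version above.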

    Read the article

  • How to convert list into a string?

    - by PARIJAT
    I have extracted some data from the file and want to write it to file2, but the program says 'sequence item 1: expected string, list found'. I want to know how I can convert buffer (a list) into a string so that it can be saved in file2.

        file = open('/ddfs/user/data/k/ktrip_01/hmm.txt', 'r')
        file2 = open('/ddfs/user/data/k/ktrip_01/hmm_write.txt', 'w')
        buffer = []
        rec = file.readlines()
        for line in rec:
            field = line.split()
            print '>', field[0]
            term = field[0]
            buffer.append(term)
            print field[1], field[2], field[6], field[12]
            term1 = field[1]
            buffer.append(term1)
            term2 = field[2]
            buffer.append(term2)
            term3 = field[6]
            buffer.append(term3)
            term4 = field[12]
            buffer.append(term4)
        file2.write(buffer)
        file.close()
        file2.close()
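
    A hedged sketch of the fix the title asks for: join the collected fields into a single string before writing. The paths and field indices are carried over from the question; the tab delimiter is an assumption.

        # Sketch: write one tab-separated line per input record instead of
        # passing a list to write().
        file = open('/ddfs/user/data/k/ktrip_01/hmm.txt', 'r')
        file2 = open('/ddfs/user/data/k/ktrip_01/hmm_write.txt', 'w')
        for line in file:
            field = line.split()
            wanted = [field[0], field[1], field[2], field[6], field[12]]
            file2.write('\t'.join(wanted) + '\n')   # join() turns the list into one string
        file.close()
        file2.close()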

    Read the article

  • removing the single quote from a list of

    - by tanky
    I need to append/format a URL with a list of ids for an API call. However, when I put the list at the end of the API call,

        'https://api.twitter.com/1.1/users/lookup.json?user_id=%s' % a

    I just get an empty string as a response. I have tried turning the list into a string and removing the square brackets with

        a = str(followers['ids'])[1:-1]

    but I still get the same problem, and I'm assuming it's being caused by the single quote at the start. I have tried removing the apostrophe from the string with

        a.replace("'", "")

    and now I have run out of ideas. Thanks
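
    A minimal sketch of the usual way to build the comma-separated user_id parameter; followers['ids'] is assumed to be a list of numeric ids as in the question.

        # Sketch: join the ids with commas instead of str()-ing the whole list,
        # which keeps the brackets and (for string ids) the quote characters.
        followers = {'ids': [12345, 67890, 13579]}   # invented sample data

        ids_param = ','.join(str(i) for i in followers['ids'])
        url = 'https://api.twitter.com/1.1/users/lookup.json?user_id=%s' % ids_param
        print url   # ...lookup.json?user_id=12345,67890,13579

    Note also that replace() returns a new string, so a.replace("'", "") has no effect unless the result is assigned back to a.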

    Read the article

  • Database Error django

    - by Megan
    DatabaseError at /admin/delmarva/event/: no such column: delmarva_event.eventdate

    I created a class in my models.py file:

        from django.db import models
        from django.contrib.auth.models import User

        class Event(models.Model):
            eventname = models.CharField(max_length=100)
            eventdate = models.DateField()
            eventtime = models.TimeField()
            address = models.CharField(max_length=200)
            user = models.ForeignKey(User)

            def __unicode__(self):
                return self.eventname

    Now when I try to view my events in my admin or my main_page it gives me the error that there is no eventdate. I tried syncing the db again but nothing changed. Also, I commented eventdate out to see if I get a different error, and then it states that delmarva_event.eventtime does not exist as well. It is weird because it does not have a problem with eventname. Any suggestions would be greatly appreciated!
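
    For context, syncdb in Django of this era only creates tables that don't exist yet; it does not add new columns to an existing table, which matches these symptoms if eventdate and eventtime were added after the first syncdb. A hedged way to confirm is to list the table's actual columns; the sketch below assumes the default SQLite development database and a guessed filename.

        # Sketch: print the columns that really exist in delmarva_event.
        import sqlite3

        conn = sqlite3.connect('dev.db')   # hypothetical database file name
        for column in conn.execute("PRAGMA table_info(delmarva_event)"):
            print column[1]                # each existing column name

    If the new columns are missing, either drop and recreate the table (losing its data) before running syncdb again, or add the columns by hand or with a migration tool such as South.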

    Read the article

  • Turbogears 2 vs Django - any advice on choosing replacement for Turbogears 1?

    - by michela
    I have been using Turbogears 1 for prototyping small sites for the last couple of years, and it is getting a little long in the tooth. Any suggestions on making the call between upgrading to Turbogears 2 or switching to something like Django? I'm torn between the familiarity of the TG community, who are pretty responsive and produce pretty good documentation, and the far larger community using Django. I am quite tempted by the built-in CMS features and the Google AppEngine support. Any advice? Thanks, .M.

    Read the article

  • Elegant Disjunctive Normal Form in Django

    - by Mike
    Let's say I've defined this model:

        class Identifier(models.Model):
            user = models.ForeignKey(User)
            key = models.CharField(max_length=64)
            value = models.CharField(max_length=255)

    Each user will have multiple identifiers, each with a key and a value. I am 100% sure I want to keep the design like this, there are external reasons why I'm doing it that I won't go through here, so I'm not interested in changing this. I'd like to develop a function of this sort:

        def get_users_by_identifiers(**kwargs):
            # something goes here
            return users

    The function will return all users that have one of the key=value pairs specified in **kwargs. Here's an example usage:

        get_users_by_identifiers(a=1, b=2)

    This should return all users for whom a=1 or b=2. I've noticed that the way I've set this up, this amounts to a disjunctive normal form... the SQL query would be something like:

        SELECT DISTINCT(user_id) FROM app_identifier
        WHERE (key = "a" AND value = "1")
           OR (key = "b" AND value = "2") ...

    I feel like there's got to be some elegant way to take the **kwargs input and do a Django filter on it, in just 1-2 lines, to produce this result. I'm new to Django though, so I'm just not sure how to do it. Here's my function now, and I'm completely sure it's not the best way to do it :)

        def get_users_by_identifiers(**identifiers):
            users = []
            for key, value in identifiers.items():
                for identifier in Identifier.objects.filter(key=key, value=value):
                    if not identifier.user in users:
                        users.append(identifier.user)
            return users

    Any ideas? :) Thanks!
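
    A hedged sketch of the Q-object approach commonly used for this kind of OR query; it assumes the Identifier model above and is not taken from the original thread.

        # Sketch: build one OR'ed condition from the kwargs and filter once.
        import operator
        from django.db.models import Q

        def get_users_by_identifiers(**identifiers):
            # (key="a" AND value="1") OR (key="b" AND value="2") OR ...
            condition = reduce(operator.or_,
                               (Q(key=k, value=v) for k, v in identifiers.items()))
            rows = Identifier.objects.filter(condition).select_related('user')
            return list(set(row.user for row in rows))

    With an empty kwargs dict reduce() would raise a TypeError, so a guard may be wanted in practice.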

    Read the article

  • Django-admin.py not being recognized suddenly

    - by Jen Camara
    I tried starting a new Django project yesterday but when I did "django-admin.py startproject projectname" I got an error stating: "django-admin.py is not recognized as an internal or external command." The strange thing is, when I first installed Django, I made a few projects and everything worked fine. But now after going back a few months later it has suddenly stopped working. I've tried looking around for an answer and all I could find is that this typically has to do with the system path settings, however, I know that I have the proper paths set up so I don't understand what's happening. Does anybody have any idea what's going on?
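
    The "not recognized" message on Windows usually means the directory containing django-admin.py has dropped off the PATH; a hedged way to check that from Python is shown below. The Scripts directory mentioned in the comments is a typical guess, not something from the question.

        # Sketch: list which PATH directories actually contain django-admin.py.
        import os

        hits = [d for d in os.environ['PATH'].split(os.pathsep)
                if os.path.exists(os.path.join(d, 'django-admin.py'))]
        print hits or 'django-admin.py is not on any PATH directory'
        # Typical fix: re-add something like C:\Python26\Scripts to PATH, or run it
        # explicitly:  python C:\Python26\Scripts\django-admin.py startproject projectname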

    Read the article

  • gae error:AttributeError: 'NoneType' object has no attribute 'user_is_member'

    - by zjm1126
        class Thread(db.Model):
            members = db.StringListProperty()

            def user_is_member(self, user):
                return str(user) in self.members

    and

        thread = Thread.get(db.Key.from_path('Thread', int(id)))
        is_member = thread.user_is_member(user)

    but the error is:

        Traceback (most recent call last):
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 511, in __call__
            handler.get(*groups)
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\util.py", line 62, in check_login
            handler_method(self, *args)
          File "D:\zjm_code\forum_blog_gae\main.py", line 222, in get
            is_member = thread.user_is_member(user)
        AttributeError: 'NoneType' object has no attribute 'user_is_member'

    why? thanks
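
    The traceback means Thread.get() returned None, i.e. no entity exists for that key, so the method is looked up on None. A hedged sketch of the usual guard, written inside the handler's get() from the question:

        # Sketch: handle the case where no Thread entity exists for the given id.
        thread = Thread.get(db.Key.from_path('Thread', int(id)))
        if thread is None:
            self.error(404)          # or render a "no such thread" page
            return
        is_member = thread.user_is_member(user)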

    Read the article

  • more efficient way to pickle a string

    - by gatoatigrado
    The pickle module seems to use string escape characters when pickling; this becomes inefficient e.g. on numpy arrays. Consider the following:

        z = numpy.zeros(1000, numpy.uint8)
        len(z.dumps())
        len(cPickle.dumps(z.dumps()))

    The lengths are 1133 characters and 4249 characters respectively. z.dumps() reveals something like "\x00\x00" (actual zeros in the string), but pickle seems to be using the string's repr() function, yielding "'\x00\x00'" (zeros being ascii zeros), i.e.:

        ("0" in z.dumps()) == False
        ("0" in cPickle.dumps(z.dumps())) == True
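
    A hedged sketch of the usual remedy: use a binary pickle protocol instead of the default protocol 0, which escapes raw bytes as ASCII.

        # Sketch: protocol 2 is a binary format, so the raw bytes are stored
        # without the escaping that inflates the protocol-0 output.
        import cPickle
        import numpy

        z = numpy.zeros(1000, numpy.uint8)
        print len(cPickle.dumps(z.dumps()))        # ~4249: protocol 0 escapes the bytes
        print len(cPickle.dumps(z.dumps(), 2))     # binary protocol, no escaping
        print len(cPickle.dumps(z, 2))             # pickling the array directly also stays compact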

    Read the article

  • How to create a non-persistent Elixir/SQLAlchemy object?

    - by siebert
    Hi, because of legacy data which is not available in the database but in some external files, I want to create a SQLAlchemy object which contains data read from the external files but isn't written to the database if I execute session.flush(). My code looks like this:

        try:
            return session.query(Phone).populate_existing().filter(Phone.mac == ident).one()
        except:
            return self.createMockPhoneFromLicenseFile(ident)

        def createMockPhoneFromLicenseFile(self, ident):
            # Some code to read necessary data from file deleted....
            phone = Phone()
            phone.mac = foo
            phone.data = bar
            phone.state = "Read from legacy file"
            phone.purchaseOrderPosition = self.getLegacyOrder(ident)
            # SQLAlchemy magic doesn't seem to work here, probably because we don't insert the created
            # phone object into the database. So we set the id fields manually.
            phone.order_id = phone.purchaseOrderPosition.order_id
            phone.order_position_id = phone.purchaseOrderPosition.order_position_id
            return phone

    Everything works fine except that on a session.flush() executed later in the application, SQLAlchemy tries to write the created Phone object to the database (which fortunately doesn't succeed, because phone.state is longer than the data type allows), which breaks the function that issues the flush. Is there any way to prevent SQLAlchemy from trying to write such an object? Ciao, Steffen
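
    A hedged sketch of one common way to keep such an object out of the unit of work: detach it from the session before the next flush. Whether the freshly created Phone is auto-added to the session depends on the Elixir configuration, so this is an assumption to verify.

        # Sketch: detach the mock object so a later session.flush() ignores it.
        # 'session' and 'phone' are the names used in the question.
        phone.order_id = phone.purchaseOrderPosition.order_id
        phone.order_position_id = phone.purchaseOrderPosition.order_position_id
        session.expunge(phone)   # remove from the session; flush() will skip it
        return phone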

    Read the article

  • Extract all files with directory path in given directory

    - by gaurav
    I have a tar archive containing a directory which I need to extract into a given directory. For example: the archive contains a directory TarPrefix/x/y/z, and I want to extract it into a target directory such as extracted/a/; that directory should then contain all the files and directories that are under TarPrefix/x/y/z. I use

        subdir_and_files = [
            tarinfo for tarinfo in tar.getmembers()
            if tarinfo.name.startswith("subfolder/")
        ]

    to get the list of all the members under the directory path "subfolder/", and then I extract them using

        tar.extractall("extracted/a", subdir_and_files)

    but this extracts all the members with their full directory path, so for example it results in extracted/a/x/y/z. Could you please help me in extracting these files into the given folder?
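
    A hedged sketch of the usual trick: rewrite each member's name to strip the leading prefix before extracting. The prefix and target paths echo the question; the archive name is a guess, and error handling is omitted.

        # Sketch: strip "TarPrefix/x/y/z/" from every member name so the contents
        # land directly under the target directory.
        import tarfile

        prefix = "TarPrefix/x/y/z/"
        target = "extracted/a"

        tar = tarfile.open("archive.tar")            # hypothetical archive name
        members = [m for m in tar.getmembers() if m.name.startswith(prefix)]
        for m in members:
            m.name = m.name[len(prefix):]            # e.g. ".../z/f.txt" -> "f.txt"
        members = [m for m in members if m.name]     # drop the now-empty directory entry
        tar.extractall(target, members)
        tar.close()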

    Read the article

  • error in fetching url data

    - by Rahul s
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import util
        from google.appengine.ext import db
        from google.appengine.api import urlfetch

        class TrakHtml(db.Model):
            hawb = db.StringProperty(required=False)
            htmlData = db.TextProperty()

        class MainHandler(webapp.RequestHandler):
            def get(self):
                Traks = list()
                Traks.append('93332134')
                #Traks.append('91779831')
                #Traks.append('92782244')
                #Traks.append('38476214')
                for st in Traks:
                    trak = TrakHtml()
                    trak.hawb = st
                    url = 'http://etracking.cevalogistics.com/eTrackResultsMulti.aspx?sv=' + st
                    result = urlfetch.fetch(url)
                    self.response.out.write(result.read())
                    trak.htmlData = result.read()
                    trak.put()

    result.read() is not giving the whole file, it is only giving some portion of it. trak.htmlData is a TextProperty(), so it should be able to store the whole file, and that is what I want.
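
    A hedged note: with App Engine's urlfetch the response body is normally taken from result.content (a plain string attribute) rather than by calling read(), and even on a file-like object a second read() call would not return the same data again. A sketch of the loop body under that assumption:

        # Sketch: fetch the body once via .content and reuse it.
        result = urlfetch.fetch(url)
        html = result.content                  # full response body as a string
        self.response.out.write(html)
        trak.htmlData = db.Text(html.decode('utf-8', 'replace'))   # TextProperty stores unicode
        trak.put()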

    Read the article

  • How to skip interstitial in a django view if a user hits the back button?

    - by Jose Boveda
    I have an application with an interstitial page to hold the user while an intensive operation runs in the background (it takes anywhere from 30 seconds to 1 minute). Once the operation is done, the user is redirected to the results page. Once on the results page, typical user behavior is to hit the 'back' button to perform the operation on a different input set. However, the back button takes them to the interstitial, not the original form. The desired behavior is to go back to the original form, skipping the interstitial entirely. I'd like this to be the default behavior if the user reaches the interstitial page from anywhere but the original form. I thought I could create this by using the @never_cache function decorator in my view for the interstitial, and logic based on request.META['HTTP_REFERER']; however, the page doesn't respect these. The browser's back button still trumps this behavior. Any ideas on how to solve this issue?
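
    A hedged sketch of one way this is often handled: instead of relying on the referer, have the form view set a one-shot session flag before handing off, and send any other visit to the interstitial straight back to the form. View and URL names here are illustrative assumptions, not from the question.

        # Sketch: only allow entry to the interstitial when the form just set the flag.
        from django.shortcuts import redirect, render
        from django.views.decorators.cache import never_cache

        @never_cache
        def interstitial(request):
            if not request.session.pop('came_from_form', False):
                return redirect('input-form')          # hypothetical URL name
            return render(request, 'interstitial.html')

    The form view would set request.session['came_from_form'] = True just before redirecting to the interstitial, so a back-button visit bounces past it (provided the browser re-requests the page rather than serving it from cache).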

    Read the article

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya,
    Presently I have a GAE app that does some offline processing (backs up a user's data) and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are:
    1. Adding some code to the offline processing code that 'spoofs' it as a form upload to the blobstore, and going thru the normal blobstore process to serve the file.
    2. Having the offline processing code store the file somewhere off of GAE, and serving it from there.
    Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit. Any input would be appreciated.
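
    If the file can be landed in the blobstore (option 1), serving it afterwards is the well-trodden part; a hedged sketch of a download handler, with the URL pattern and filename as assumptions:

        # Sketch: stream a blob back to the user; 'resource' is the blob key
        # captured from the URL.
        import urllib
        from google.appengine.ext import blobstore
        from google.appengine.ext.webapp import blobstore_handlers

        class ServeBackupHandler(blobstore_handlers.BlobstoreDownloadHandler):
            def get(self, resource):
                blob_key = str(urllib.unquote(resource))
                if blobstore.BlobInfo.get(blob_key) is None:
                    self.error(404)
                else:
                    self.send_blob(blob_key, save_as='backup.dat')   # filename is a guess

    Getting the generated file into the blobstore in the first place (the 'spoofed' upload, or the later file-writing APIs) is the part that varies with the SDK version, so that side is left out here.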

    Read the article

  • Modify passed, nested dict/list

    - by Gerenuk
    I was thinking of writing a function to normalize some data. A simple approach is:

        def normalize(l, aggregate=sum, norm_by=operator.truediv):
            aggregated = aggregate(l)
            for i in range(len(l)):
                l[i] = norm_by(l[i], aggregated)

        l = [1, 2, 3, 4]
        normalize(l)
        l -> [0.1, 0.2, 0.3, 0.4]

    However for nested lists and dicts where I want to normalize over an inner index this doesn't work. I mean I'd like to get:

        l = [[1, 100], [2, 100], [3, 100], [4, 100]]
        normalize(l, ??)
        l -> [[0.1, 100], [0.2, 100], [0.3, 100], [0.4, 100]]

    Any ideas how I could implement such a normalize function? Maybe it would be crazy cool to write:

        normalize(l[...][0])

    Is it possible to make this work? Or any other ideas? Also not only lists but also dicts could be nested. Hmm...

    EDIT: I just found out that numpy offers such a syntax (for lists however). Anyone know how I would implement the ellipsis trick myself?
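
    A hedged sketch of one way to get there without the ellipsis syntax: add getter/setter key functions so the same normalize() works on flat lists, nested lists, and dicts. The parameter names are invented for illustration.

        # Sketch: normalize in place over a selectable inner element.
        from __future__ import division

        def normalize(container, keys=None, get=None, put=None):
            keys = keys if keys is not None else range(len(container))
            get = get or (lambda c, k: c[k])                   # how to read one value
            put = put or (lambda c, k, v: c.__setitem__(k, v)) # how to write it back
            total = sum(get(container, k) for k in keys)
            for k in keys:
                put(container, k, get(container, k) / total)

        l = [[1, 100], [2, 100], [3, 100], [4, 100]]
        normalize(l,
                  get=lambda c, k: c[k][0],
                  put=lambda c, k, v: c[k].__setitem__(0, v))
        print l   # [[0.1, 100], [0.2, 100], [0.3, 100], [0.4, 100]]

    For a nested dict the same call works by passing keys=d.keys() and suitable get/put lambdas.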

    Read the article

  • Tkinter mouse event initially triggered

    - by user3714884
    I'm currently learning Tkinter and I cannot find a solution for my problem here or outside Stack Overflow. In a nutshell, all events that I bind to my widgets are triggered initially and don't respond to my actions. In this example, the red rectangle appears on the canvas as soon as I run the code, and color=random.choice(['red', 'blue']) revealed that the event binding doesn't work after that:

        import Tkinter as tk

        class application(tk.Frame):
            def __init__(self, master=None):
                tk.Frame.__init__(self, master)
                self.can = tk.Canvas(master, width=200, height=200)
                self.can.bind('<Button-2>', self.draw())
                self.can.grid()

            def draw(self):
                self.can.create_rectangle(50, 50, 100, 100, fill='red')

        app = application()
        app.mainloop()

    I use a Mac, but I haven't got a clue about its role in the problem. Could anyone please point me at the mistake I made here?
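
    The immediate cause is that bind() is given the result of calling self.draw() (so the rectangle is drawn once, during __init__) instead of the method itself; a hedged sketch of the corrected pieces:

        # Sketch: pass the bound method itself, and accept the event argument
        # Tkinter passes to every callback.
        self.can.bind('<Button-2>', self.draw)

        def draw(self, event):
            self.can.create_rectangle(50, 50, 100, 100, fill='red')

    Note that <Button-2> is usually the middle mouse button (on some Mac setups the right one); <Button-1> is the primary click.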

    Read the article

  • problem on strings, tuple strings

    - by suresh
    Write a function, called constrainedMatchPair, which takes three arguments: a tuple representing starting points for the first substring, a tuple representing starting points for the second substring, and the length of the first substring. The function should return a tuple of all members (call it n) of the first tuple for which there is an element in the second tuple (call it k) such that n + m + 1 = k, where m is the length of the first substring. Complete the definition:

        def constrainedMatchPair(firstMatch, secondMatch, length):
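
    A hedged sketch of one straightforward completion of the exercise; the length argument plays the role of m in the prompt:

        # Sketch: keep every start n of the first substring for which some start k
        # of the second substring satisfies n + m + 1 == k.
        def constrainedMatchPair(firstMatch, secondMatch, length):
            return tuple(n for n in firstMatch
                         if any(n + length + 1 == k for k in secondMatch))

        print constrainedMatchPair((0, 5, 12), (4, 9), 3)   # -> (0, 5)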

    Read the article
