Search Results

Search found 17966 results on 719 pages for 'xml parsing'.

Page 511 of 719

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as a means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, post-deploy changes, etc.). It's working well, but the project is getting big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is for now).
    I'm using Mercurial as the VCS, so my idea is to somehow ask it for the files that changed between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems:
    I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is the same applies here: if I try to parse the output of hg log, the variables will undergo word splitting and all kinds of transformations.
    hg log doesn't tell me whether a file was modified, added, or deleted. Differentiating between modified and deleted files is the minimum I need.
    So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it matters, but I'm willing to switch.
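
    One way to avoid parsing hg log at all is hg status, which compares two revisions directly and prefixes every path with M (modified), A (added) or R (removed). A minimal sketch of that idea in Python - upload_file and delete_remote are hypothetical placeholders for whatever yafc/FTP commands end up being used:

        import subprocess

        def changed_files(rev1, rev2):
            # "hg status --rev A --rev B" prints one path per line: "M path", "A path", "R path"
            out = subprocess.check_output(["hg", "status", "--rev", rev1, "--rev", rev2]).decode()
            to_upload, to_delete = [], []
            for line in out.splitlines():
                status, path = line.split(" ", 1)
                if status in ("M", "A"):
                    to_upload.append(path)
                elif status == "R":
                    to_delete.append(path)
            return to_upload, to_delete

        to_upload, to_delete = changed_files("1200", "tip")
        for path in to_upload:
            upload_file(path)      # placeholder: push the file over FTP
        for path in to_delete:
            delete_remote(path)    # placeholder: remove the file on the server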

    Read the article

  • An algo for generating code callgraphs

    - by Shrey
    I am working on a project which requires generating some metrics for a piece of code (it can be C/C++/Java/Python). One of the metrics could be a call graph created after parsing the code (the programs are expected to be small - probably under 1000 lines). As of now, I am looking for a way to create a program (it can be C/Python) which takes a file (C/C++/Python/Java) as input and produces a textual output containing the approximate calling sequence as well as the tokens in the code file. I have looked at some other tools which do the same thing - like splint, pylint, codeviz etc. So I have two ways of solving my problem: read and understand the algorithm these tools use (tokenization, graph generation, etc.), or start from a basic algorithm (something like very high-level steps) and then sit down and build each part the way I want it. I know re-inventing the wheel is not a good idea, but I would still like to give option (2) a shot. The only issue is that currently I am drawing a blank. My question: does anyone have any know-how about how to create call graphs? Any hints as to what I should do? Any top-level steps I can follow? Thanks a lot.
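
    For the Python-input case at least, the standard library already does the tokenization and parsing, so a crude call graph can be built by walking the AST and recording which function definitions contain which call expressions. A rough sketch (direct calls only, no import or method resolution, and calls inside nested defs get attributed to the enclosing function):

        import ast
        import sys

        def build_callgraph(source):
            tree = ast.parse(source)
            graph = {}  # function name -> set of names it calls
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    calls = set()
                    for child in ast.walk(node):
                        if isinstance(child, ast.Call):
                            func = child.func
                            if isinstance(func, ast.Name):         # plain call: foo()
                                calls.add(func.id)
                            elif isinstance(func, ast.Attribute):  # attribute call: obj.bar()
                                calls.add(func.attr)
                    graph[node.name] = calls
            return graph

        if __name__ == "__main__":
            with open(sys.argv[1]) as f:
                graph = build_callgraph(f.read())
            for caller, callees in sorted(graph.items()):
                print("%s -> %s" % (caller, ", ".join(sorted(callees)) or "(no calls)"))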

    Read the article

  • Python re module becomes 20 times slower when using more than 100 different regexes

    - by Wiil
    My problem is about parsing log files and removing the variable parts on each line so I can group them. For instance:
        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)
    I have about 120+ matching rules like those above. I found no performance issues while searching successively with 100 different regexes, but a huge slowdown appears when applying the 101st. The exact same behavior happens when I replace my rule set with:
        for a in range(100):
            s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s)
    It gets 20 times slower when I use range(101) instead:
        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds. ==
        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==
    Why is this happening? And is there any known workaround? (Happens on Python 2.6.6/2.7.2 on Linux/Windows.)
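
    The usual explanation for the 100/101 cliff is the re module's internal pattern cache: in CPython 2.6/2.7 it holds 100 compiled patterns (re._MAXCACHE) and is cleared wholesale once it overflows, so with 101 distinct patterns every re.sub call ends up recompiling its regex. The standard workaround is to compile the rules once yourself; a sketch using the two example rules from the question:

        import re

        # Compile each rule exactly once instead of relying on re's internal cache.
        RULES = [
            (re.compile(r'(?i)User [_0-9A-z]+ is '),
             r"User .. is "),
            (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
             r'Message rejected because : \1 (...)'),
            # ... the remaining ~120 rules, built the same way ...
        ]

        def normalize(line):
            for pattern, replacement in RULES:
                line = pattern.sub(replacement, line)
            return line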

    Read the article

  • iPhone - trouble loading data from a web service into a table view

    - by medampudi
    I am using a window-based application and then loading my initial navigation-view-based controller. After it loads, if the user is not registered / does not have credentials present, it takes the user to a login view controller:
        loginViewController *sampleView = [[loginViewController alloc] initWithNibName:@"loginViewController" bundle:nil];
        [self.navigationController presentModalViewController:sampleView animated:YES];
        [sampleView release];
    Right after that I try to load the table with data that I get from a web service using ASIHTTP. For this question, let's say it takes 3 seconds to get the data and then deserialize it. It works out okay on later runs, as I store the username and password in a secure location, but on the first run I am not able to get the data to load into the table view. I have tried a lot of things:
    1. Initially the data-fetch code was in a different method, so I thought that might be the problem and moved it to the same place as the table view controller (navigation controller).
    2. I even put a reload-data call at the end of the parsing and deserialization code... nothing happens.
    3. I did not understand the concept of @property and all that.
    4. The screen is black with nothing displayed on it for a good 5 seconds on consecutive launches of the app, so could we have something like an MBProgressHUD implemented for the same?
    Could anyone please help with these scenarios and give guidance as to what paths to take from here?

    Read the article

  • Can this code cause a memory leak (Arduino)

    - by tbraun89
    I have an Arduino project and I created this struct:
        struct Project {
          boolean status;
          String name;
          struct Project* nextProject;
        };
    In my application I parse some data and create Project objects. To keep them in a list, each Project object except the last holds a pointer to the nextProject. This is the code where I add new projects:
        void RssParser::addProject(boolean tempProjectStatus, String tempData) {
          if (!startProject) {
            startProject = true;
            firstProject.status = tempProjectStatus;
            firstProject.name = tempData;
            firstProject.nextProject = NULL;
            ptrToLastProject = &firstProject;
          } else {
            ptrToLastProject->nextProject = new Project();
            ptrToLastProject->nextProject->status = tempProjectStatus;
            ptrToLastProject->nextProject->name = tempData;
            ptrToLastProject->nextProject->nextProject = NULL;
            ptrToLastProject = ptrToLastProject->nextProject;
          }
        }
    firstProject is a private instance variable, defined in the header file like this:
        Project firstProject;
    So if no project has been added yet, I use firstProject to add a new one; if firstProject is set, I use the nextProject pointer. I also have a reset() method that deletes the pointers to the projects:
        void RssParser::reset() {
          delete ptrToLastProject;
          delete firstProject.nextProject;
          startProject = false;
        }
    After each parsing run I call reset(). The problem is that the memory used is not released. If I comment out the addProject method there are no issues with my memory. Can someone tell me what could cause the memory leak?

    Read the article

  • Ways to implement tags - pros and cons of each

    - by bobobobo
    Related: Using SO as an example, what is the most sensible way to manage tags if you anticipate they will change often?
    Way 1: Seriously denormalized (comma delimited)
        table posts
        +--------+-----------------+
        | postId | tags            |
        +--------+-----------------+
        | 1      | c++,search,code |
    Here tags are comma delimited. Pros: Tags are retrieved at once with a single select query. Updating tags is simple. Easy and cheap to update. Cons: Extra parsing on tag retrieval, difficult to count how many posts use which tags.
    (Alternatively, if limited to something like 5 tags:)
        table posts
        +--------+-------+--------+-------+-------+-------+
        | postId | tag_1 | tag_2  | tag_3 | tag_4 | tag_5 |
        +--------+-------+--------+-------+-------+-------+
        | 1      | c++   | search | code  |       |       |
    Way 2: "Slightly normalized" (separate table, no intersection)
        table posts
        +--------+---------------+
        | postId | title         |
        +--------+---------------+
        | 1      | How do u tag? |
        table taggings
        +--------+---------+
        | postId | tagName |
        +--------+---------+
        | 1      | C++     |
        | 1      | search  |
    Pros: Easy to see tag counts (count(*) from taggings where tagName='C++'). Cons: tagName will likely be repeated many, many times.
    Way 3: The cool kid's (normalized with intersection table)
        table posts
        +--------+-------------------------------------+
        | postId | title                               |
        +--------+-------------------------------------+
        | 1      | Why is a raven like a writing desk? |
        table tags
        +-------+---------+
        | tagId | tagName |
        +-------+---------+
        | 1     | C++     |
        | 2     | search  |
        | 3     | foofle  |
        table taggings
        +--------+-------+
        | postId | tagId |
        +--------+-------+
        | 1      | 1     |
        | 1      | 2     |
        | 1      | 3     |
    Pros: No repeating tag names. More girls will like you. Cons: More expensive to change tags than way #1.

    Read the article

  • How to make command-line options mandatory with GLib?

    - by ahe
    I use GLib to parse some command-line options. The problem is that I want to make two of those options mandatory, so that the program terminates with the help screen if the user omits them. My code looks like this:
        static gint line = -1;
        static gint column = -1;
        static GOptionEntry options[] = {
            {"line", 'l', 0, G_OPTION_ARG_INT, &line, "The line", "L"},
            {"column", 'c', 0, G_OPTION_ARG_INT, &column, "The column", "C"},
            {NULL}
        };
        ...
        int main(int argc, char** argv)
        {
            GError *error = NULL;
            GOptionContext *context;
            context = g_option_context_new ("- test");
            g_option_context_add_main_entries (context, options, NULL);
            if (!g_option_context_parse(context, &argc, &argv, &error)) {
                usage(error->message, context);
            }
            ...
            return 0;
        }
    If I omit one or both of those parameters on the command line, g_option_context_parse() still succeeds and the values in question (line and/or column) are still -1. How can I tell GLib to fail parsing if the user doesn't pass both options on the command line? Maybe I'm just blind, but I couldn't find a flag I can put into my GOptionEntry data structure to make those fields mandatory. Of course I could check whether one of those variables is still -1, but then the user could just have passed that value on the command line, and I want to print a separate error message if the values are out of range.

    Read the article

  • Testing ActionMailer's receive method (Rails)

    - by Brian Armstrong
    There is good documentation out there on testing ActionMailer send methods which deliver mail, but I'm unable to figure out how to test a receive method that is used to parse incoming mail. I want to do something like this:
        require 'test_helper'
        class ReceiverTest < ActionMailer::TestCase
          test "parse incoming mail" do
            email = TMail::Mail.parse(File.open("test/fixtures/emails/example1.txt",'r').read)
            assert_difference "ProcessedMail.count" do
              Receiver.receive email
            end
          end
        end
    But I get the following error on the line which calls Receiver.receive:
        NoMethodError: undefined method `index' for #<TMail::Mail:0x102c4a6f0>
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:128:in `gets'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:392:in `parse_header'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:139:in `initialize'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:43:in `open'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/port.rb:340:in `ropen'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:138:in `initialize'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `new'
        /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `parse'
        /Library/Ruby/Gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:417:in `receive'
    TMail is parsing my test file correctly, so that's not it. Thanks!

    Read the article

  • Javascript Recursion

    - by rpophessagr
    I have an ajax call and would like to re-issue it once I finish parsing and animating the result into the page, and that's where I'm getting stuck. I was able to recall the function, but it seems to not take into account the delays in the animation - i.e. the console keeps outputting the values at a wild pace. I thought setInterval might help, with the interval being the sum of the lengths of my delays, but I can't get that to work...
        function loadEm(){
          var result = new Array();
          $.getJSON("jsonCall.php", function(results){
            $.each(results, function(i, res){
              rand = (Math.floor(Math.random()*11)*1000)+2000;
              fullRand += rand;
              console.log(fullRand);
              $("tr:first").delay(rand).queue(function(next) {
                doStuff(res);
                next();
              });
            });
            var int = self.setInterval("loadEm()", fullRand);
          });
        }
        });

    Read the article

  • What is the best way to optimize my JSON on an ASP.NET MVC site?

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site, and we have a pretty slow network (internal application), so the grid seems to take a long time to load (the issue is network as well as parsing and rendering). I am trying to determine how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id= '" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId= '" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }
    So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am appending HTML around the raw data, and it is the same HTML on every record. It seems it would be more efficient to send just the data and "append" the HTML around it on the client side. Is this possible? Then I would only be sending the actual data over the wire and have the client side add the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.

    Read the article

  • WN server filter won't work

    - by Mike Fink
    WN servers have an alternative to CGI programs called filters. I have been trying to get one to work, but I have had no luck. I am writing in Python. It looks like the server is not receiving any output from the program; it is parsing nothing and wrapping this nothing in my standard header and footer. I have chmod 755 the program, and my index.wn file reads:
        Default-Attributes=parse
        Default-Wrappers=templates/template1.inc
        File=includeTests.html
        File=index.html
        File=archives.html
        File=contact.html
        File=style.css
        File=testProgram.py
        #here is the stuff about the filter
        File=testFilter.html
        Content-type=text/html
        Filter=testProgram.py
        Attributes=parse, cgi
    Here is what is in the filter called testProgram.py:
        #!/usr/bin/python
        print "Content-Type: text/html\n\n"
        print "hi"
    testProgram.py works perfectly if it is shoved into a cgi-bin folder and chmoded. I suppose my problem may lie with the fact that I have never seen a filter program in Python - I'm not sure I have even seen a filter program at all. Does anyone out there have any experience with WN servers and filters? Any ideas?

    Read the article

  • Short snippet summarizing a webpage?

    - by Legend
    Is there a clean way of grabbing the first few lines of a given link that summarize that link? I have seen this being done in some online bookmarking applications but have no clue how they were implemented. For instance, if I give this link, I should be able to get a summary which is roughly like: "I'll admit it, I was intimidated by MapReduce. I'd tried to read explanations of it, but even the wonderful Joel Spolsky left me scratching my head. So I plowed ahead trying to build decent pipelines to process massive amounts of data." Nothing complex at first sight, but grabbing these is the challenging part - just the first few lines of the actual post would be fine. Should I use the raw approach of grabbing the entire HTML and parsing the meta tags or something fancy like that (which obviously and unfortunately does not generalize to every link out there), or is there a smarter way to achieve this? Any suggestions? Update: I just found that InstaPaper does this, but I am not sure if it is getting the information from RSS feeds or some other way.
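
    One low-tech baseline is to fetch the page and keep the text of the first substantial <p> element. A stdlib-only Python sketch of that heuristic - it will misfire on pages whose lead text is not in a plain paragraph or is rendered by script:

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class FirstParagraph(HTMLParser):
            """Collects the text of the first reasonably long <p> element."""
            def __init__(self):
                super().__init__()
                self.in_p = False
                self.chunks = []
                self.summary = None

            def handle_starttag(self, tag, attrs):
                if tag == "p" and self.summary is None:
                    self.in_p = True
                    self.chunks = []

            def handle_data(self, data):
                if self.in_p:
                    self.chunks.append(data)

            def handle_endtag(self, tag):
                if tag == "p" and self.in_p:
                    self.in_p = False
                    text = " ".join("".join(self.chunks).split())
                    if len(text) > 80:  # skip tiny boilerplate paragraphs
                        self.summary = text[:300]

        def summarize(url):
            html = urlopen(url).read().decode("utf-8", errors="replace")
            parser = FirstParagraph()
            parser.feed(html)
            return parser.summary

    Checking for a description or og:description meta tag first and falling back to this tends to cover most blog-style pages.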

    Read the article

  • Pass arguments to a parameter class object

    - by David R
    This is undoubtedly a simple question. I used to do this before, but it's been around 10 years since I worked in C++, so I can't remember properly and I can't get a simple constructor call working. The idea is that instead of parsing the args in main, main would create an object specifically designed to parse the arguments and return them as required. So:
        Parameters params = new Parameters(argc, argv)
    then I can call things like params.getfile(). The only problem is I'm getting a compiler error in Visual Studio 2008, and I'm sure this is simple, but I think my mind is just too rusty. What I've got so far is really basic. In the main:
        #include "stdafx.h"
        #include "Parameters.h"
        int _tmain(int argc, _TCHAR* argv[])
        {
            Parameters params = new Parameters(argc, argv);
            return 0;
        }
    Then in the Parameters header:
        #pragma once
        class Parameters
        {
        public:
            Parameters(int, _TCHAR*[]);
            ~Parameters(void);
        };
    Finally in the Parameters class:
        #include "Stdafx.h"
        #include "Parameters.h"
        Parameters::Parameters(int argc, _TCHAR* argv[])
        {
        }
        Parameters::~Parameters(void)
        {
        }
    I would appreciate it if anyone could see where my ageing mind has missed the really obvious. Thanks in advance.

    Read the article

  • How might a C# programmer approach writing a solution in javascript?

    - by Ben McCormack
    UPDATE: Perhaps this wasn't clear from my original post, but I'm mainly interested in knowing a best practice for how to structure JavaScript code while building a solution, not simply learning how to use APIs (though that is certainly important).
    I need to add functionality to a web site, and our team has decided to approach the solution using a web service that receives a call from a JSON-formatted AJAX request from within the web site. The web service has been created and works great. Now I have been tasked with writing the JavaScript/HTML side of the solution. If I were solving this problem in C#, I would create separate classes for formatting the request, handling the AJAX request/response, parsing the response, and finally inserting the response somehow into the DOM. I would build properties and methods appropriately into each class, doing my best to separate functionality and structure where appropriate. However, I have to solve this problem in JavaScript.
    Firstly, how could I approach my solution in JavaScript the way I would approach it from C# as described above? Or more importantly, what's a better way to approach structuring code in JavaScript? Any advice or links to helpful material on the web would be greatly appreciated.
    NOTE: Though perhaps not immediately relevant to this question, it may be worth noting that we will be using jQuery in our solution.

    Read the article

  • What's the best way to get a bunch of rows from MySQL if you have an array of integer primary keys?

    - by Evan P.
    I have a MySQL table with an auto-incremented integer primary key. I want to get a bunch of rows from the table based on an array of integers I have in memory in my program. The array ranges from a handful to about 1000 items. What's the most efficient query syntax to get the rows? I can think of a few:
    1. "SELECT * FROM thetable WHERE id IN (1, 2, 3, 4, 5)" (this is what I do now)
    2. "SELECT * FROM thetable WHERE id = 1 OR id = 2 OR id = 3"
    3. Multiple queries of the form "SELECT * FROM thetable WHERE id = 1". Probably the most friendly to the query cache, but expensive due to having lots of query parsing.
    4. A union, like "SELECT * FROM thetable WHERE id = 1 UNION SELECT * FROM thetable WHERE id = 2 ...". I'm not sure if MySQL caches the results of each query; it's also the most verbose format.
    I think using the NoSQL interface in MySQL 5.6+ would be the most efficient way to do this, but I'm not yet up to MySQL 5.6.
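
    For the IN (...) form, the usual trick from client code is to generate one placeholder per id and let the driver do the escaping. A small sketch assuming a DB-API driver such as MySQLdb or PyMySQL (the connection details are placeholders):

        import pymysql  # assumption: any DB-API 2.0 MySQL driver works the same way

        def fetch_by_ids(conn, ids):
            if not ids:
                return []
            # One placeholder per id, e.g. "IN (%s, %s, %s)"; the driver escapes the values.
            placeholders = ", ".join(["%s"] * len(ids))
            sql = "SELECT * FROM thetable WHERE id IN (%s)" % placeholders
            with conn.cursor() as cur:
                cur.execute(sql, list(ids))
                return cur.fetchall()

        conn = pymysql.connect(host="localhost", user="user", password="pw", db="mydb")
        rows = fetch_by_ids(conn, [1, 2, 3, 4, 5])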

    Read the article

  • Check a list of packages to install with apt-get

    - by Joel
    I am writing a post-install script for Ubuntu in Perl (same script as seen here). One of the steps is to install a list of packages. The problem is that if apt-get install fails in one of many different ways for any one of the packages, the script dies badly. I would like to prevent that from happening. This happens because of the ways that apt-get install fails for packages that it doesn't like. For example, when I try to install a nonsense word (i.e. a mistyped package name):
        $ sudo apt-get install oblihbyvl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package oblihbyvl
    but if instead the package name has been obsoleted (installing handbrake from a PPA):
        $ sudo apt-get install handbrake
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package handbrake is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'handbrake' has no installation candidate
        $ apt-cache search handbrake
        handbrake-cli - versatile DVD ripper and video transcoder - command line
        handbrake-gtk - versatile DVD ripper and video transcoder - GTK GUI
    I have tried parsing the results of apt-cache and apt-get -s install to try to catch all possibilities before doing the install, but I seem to keep finding new ways for failures to reach the actual install system command. My question is: is there some facility, either in Perl (e.g. a module, though I would like to avoid installing modules if possible, as this is supposed to be the first thing run after a fresh install of Ubuntu) or in apt-* or dpkg, that would let me be sure the packages are all available to be installed before installing, and if not, fail gracefully in some way that lets the user decide what to do?
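
    One pre-flight check that is easy to script is asking apt-cache policy whether each package has an installation candidate before running the real apt-get install. The sketch below is in Python purely to illustrate the idea (the same shell-out works from Perl without extra modules), and it treats the exact apt-cache output format as an assumption worth verifying on your release:

        import subprocess

        def installable(package):
            """Return True if apt appears to have an installation candidate for 'package'.

            Heuristic: 'apt-cache policy' prints a 'Candidate:' line for known packages
            and '(none)' (or nothing at all) when there is nothing to install.
            """
            try:
                out = subprocess.check_output(["apt-cache", "policy", package]).decode()
            except subprocess.CalledProcessError:
                return False
            for line in out.splitlines():
                line = line.strip()
                if line.startswith("Candidate:"):
                    return "(none)" not in line
            return False  # no Candidate line at all: unknown package

        wanted = ["vim", "handbrake", "oblihbyvl"]
        missing = [p for p in wanted if not installable(p)]
        if missing:
            print("Not installable, ask the user what to do: %s" % ", ".join(missing))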

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:
        log_table = schema.Table('log_table', metadata,
                                 schema.Column('id', types.Integer, primary_key=True),
                                 schema.Column('time', types.DateTime),
                                 schema.Column('ip', types.String(length=15))
                                 ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields) #Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1
    My two questions are:
    1. Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.?
    2. When I used MySQL, the insert time for about ~1.2 million records was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
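
    The biggest single win inside this framework is usually batching: executing one insert() with a list of row dicts lets SQLAlchemy issue an executemany instead of one round trip per line. A rough sketch of that change, reusing the log_table, connection and pythoninfy_log names from above:

        BATCH_SIZE = 1000
        batch = []

        for line in file_to_parse:
            m = line_regex.match(line)
            if not m:
                continue
            batch.append(pythoninfy_log(m.groupdict()))
            if len(batch) >= BATCH_SIZE:
                # A list of dicts makes SQLAlchemy use executemany under the hood.
                connection.execute(log_table.insert(), batch)
                batch = []

        if batch:  # flush whatever is left over
            connection.execute(log_table.insert(), batch)

    For SQLite in particular, wrapping the loop in a single transaction (connection.begin() ... commit) tends to account for most of the hour-versus-minutes gap, since each standalone INSERT is otherwise its own journalled transaction.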

    Read the article

  • Releasing an autorelease pool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloc/inited at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour I was thinking about rewriting portions of my code to make the app "less parallel" in case the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. The things I do within my threaded methods are pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

    Read the article

  • Modifying DOM of a webpage

    - by Prashant Singh
    The structure of a webpage is like this:
        <div id='abc'>
          <div class='a'>Some contents here </div>
          <div class='b'>Some other contents </div>
        </div>
    My aim is to add this after the class a in the above structure:
        <div class='a'>Some other contents here </div>
    so that the final structure looks like this:
        <div id='abc'>
          <div class='a'>Some contents here </div>
          <div class='a'>Some other contents here </div>
          <div class='b'>Some other contents </div>
        </div>
    Can there be a better way to do this using DOM properties? I was thinking of the naive way of parsing the content and updating. Please comment if I am unclear in asking my doubt!

    Read the article

  • Node.js/Express Partials problem: Can't be nested too deep?

    - by heorling
    I'm learning Node.js, Express and haml.js and liking it. I've run into a pretty annoying problem though. I'm pretty new to this but have been getting nice results so far. I'm writing a jQuery-heavy web app that relies on a table containing divs. The divs slide around, switch back and forth, and are resized etc. to my heart's content. What I'm looking for is a way to switch (template?) the divs. Since I've been building in Express and mimicking the chat example, it would make sense to use partials. The rub is that I've been using inexplicit divs in haml, held within a td. The divs are constructed as follows:
        %tr
          %td
            .class1.class2.class3.classetc
    which has worked fine cross-browser. Parsing the classes works great for the JS code to pass arguments around, fetch values etc. What I'd like to be able to do is something like:
        %tr
          %td
            .class1.class2.class3.classetc
              %ul#messages
                != this.partial('message.html.haml', { collection: messages })
    Any combination I've tried with this has failed, however, and I might have tried them all. If I could put a partial into that div I'd probably be set. And you can nest them as long as you use #ids instead of .classes - but if you use more than one class it breaks! I think that's the most accurate way of summing it up. How do you do this? I've checked out various templating solutions like mu.js and micro templating by John Resig. I earlier checked out this thread on templating engines. It's very possible I'm making some fundamental mistake here; I'm new to this. What's a good way to do this?

    Read the article

  • Binary serialization and deserialization without creating files (via strings)

    - by the_V
    Hi, I'm trying to create a class that will contain functions for serializing/deserializing objects to/from a string. That's what it looks like now:
        public class BinarySerialization
        {
            public static string SerializeObject(object o)
            {
                string result = "";
                if ((o.GetType().Attributes & TypeAttributes.Serializable) == TypeAttributes.Serializable)
                {
                    BinaryFormatter f = new BinaryFormatter();
                    using (MemoryStream str = new MemoryStream())
                    {
                        f.Serialize(str, o);
                        str.Position = 0;
                        StreamReader reader = new StreamReader(str);
                        result = reader.ReadToEnd();
                    }
                }
                return result;
            }

            public static object DeserializeObject(string str)
            {
                object result = null;
                byte[] bytes = System.Text.Encoding.ASCII.GetBytes(str);
                using (MemoryStream stream = new MemoryStream(bytes))
                {
                    BinaryFormatter bf = new BinaryFormatter();
                    result = bf.Deserialize(stream);
                }
                return result;
            }
        }
    The SerializeObject method works well, but DeserializeObject does not. I always get an exception with the message "End of Stream encountered before parsing was completed". What may be wrong here?

    Read the article

  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have a FAQ with multiple "tags" and I would like to separate and count them. They are currently in the database as follows:
        ID    TITLE            CONTENT            TAGS
        1     sampletitle 1    samplecontent      tag1,tag2,tag3
        2     sampletitle 2    moresamplestuff    tag3,tag4,tag5
    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I don't prefer doing something like that is because I have so much data already that it would take quite a while. However, if there's no alternative or if it's easier than doing string parsing like this, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count number somewhere). This is in ASP Classic using an Access database. Thanks!
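
    The counting step itself is small once the tag strings are read out of the column: split on commas, trim, and tally. The logic is sketched here in Python only to show the shape - the rows list is a hypothetical stand-in for the recordset loop, and the same three steps map onto a VBScript Scripting.Dictionary:

        from collections import Counter

        # Hypothetical stand-in for the TAGS column of each FAQ row.
        rows = ["tag1,tag2,tag3", "tag3,tag4,tag5"]

        counts = Counter()
        for tags in rows:
            for tag in tags.split(","):
                tag = tag.strip().lower()  # normalise so "Tag1" and "tag1 " collapse together
                if tag:
                    counts[tag] += 1

        for tag, n in counts.most_common():
            print("%s: %d" % (tag, n))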

    Read the article

  • How can I handle arbitrary text as "nouns" in Inform 7?

    - by Beska
    In Inform, I'd like to be able to create a new action and have it work on arbitrary text. I can easily create a new action that works on existing things:
        Finding is an action with past participle found, applying to one thing.
        Understand "Find [something]" as finding.
        Carry out finding: say "You find [the noun]."
    But this only works on items that exist within the game world. If I try to "find fdsljk", for instance, it will fail because I haven't created a "fdsljk". I'd like to be able to "find fdsljk" and then be able to grab that extra text and respond with it... something like "You find the fdsljk." I was thinking that something like
        A foo is a kind of value.
        Finding is an action with past participle found, applying to one foo.
        Understand "Find [something]" as finding.
        Carry out finding: say "You find [the foo]."
    might be close... but it doesn't work. I get an error that reads:
        You wrote 'say "You find [the foo]."', and in particular 'the foo': but this asked to say something of a kind which can't be said, or rather, printed. Although this problem can arise when you use complicated text substitutions which come in variant forms depending on the kinds of value used, far more often what this means is just that you tried to use a substituted value (e.g., in 'say "The dial reads [V]."') of a kind which could not be printed out. For instance, if V is a number or a piece of text, there is no problem: but if V is a parsing topic, say an entry in a 'topic' column of a table, then this problem will arise.
    The italics are mine and highlight the key... I think this should be doable, but I'm taking the wrong path. Clues?

    Read the article

  • Which Perl module can handle a variety of date formats with Unicode characters?

    - by ram
    My requirement is parsing XML files which contain a wide variety of timestamps based on the locales in which they were written. They may contain Unicode characters in the case of Chinese or Korean locales. I have to parse these timestamps and put them in a standard format, something like 2009-11-26 12:40:54, to put them in an Oracle database. Sometimes I may not even know the locale, and yet I have to parse the timestamps. I am looking for a module that automatically detects the timestamp format (including Unicode characters for am and pm in the local language) and converts it to epoch time, so that I can convert it back to whatever form I like. I have gone through similar questions in this forum. A few suggested the DateFormat module and the Date::Parse module. The Perl distribution I am using is 5.10, so Date::Manip doesn't come as a core module. As I am supposed to use just the basic core modules and a few CPAN modules (on request - I cannot ask for all), I request you to kindly suggest a good module that satisfies all my requirements. Thanks in advance.

    Read the article

  • Tell bots apart from human visitors for stats?

    - by Pekka
    I am looking to roll my own simple web stats script. The only major obstacle on the road, as far as I can see, is telling human visitors apart from bots. I would like to have a solution for that which I don't need to maintain on a regular basis (i.e. I don't want to update text files with bot-related User-Agents). Is there any open service that does that, like Akismet does for spam? Or is there a PHP project that is dedicated to recognizing spiders and bots and provides frequent updates? To clarify: I'm not looking to block bots, and I do not need 100% watertight results. I just want to exclude as many as I can from my stats. I know that parsing the User-Agent is an option, but maintaining the patterns to parse for is a lot of work. My question is whether there is any project or service that does that already. Bounty: I thought I'd push this as a reference question on the topic. The best / most original / most technically viable contribution will receive the bounty amount.

    Read the article
