Search Results

Search found 694 results on 28 pages for 'blob'.

Page 23/28

  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs -- pretty much everything in /proc/cpuinfo, /proc/meminfo and /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the gathered data to track down performance problems later, but this is a new application, so I'm not sure exactly what I'm looking for performance-wise just yet.

    I could structure the data in the db -- have a column for each key I'm gathering. The table would end up O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns whenever I start gathering a new stat. But it would be easy to sort and analyze the data using plain SQL.

    Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably as JSON in a TEXT field.

    Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
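
    A minimal sketch of the unstructured design described above -- one serialized payload per host per sample -- using SQLite for self-containment; the table and column names are illustrative, not from the question:

        import json
        import sqlite3
        import time

        # Illustrative schema for the "three columns" approach: host id,
        # timestamp, and the serialized key-value array as a JSON TEXT blob.
        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE samples (
                host_id  INTEGER NOT NULL,
                taken_at INTEGER NOT NULL,
                payload  TEXT    NOT NULL,
                PRIMARY KEY (host_id, taken_at)
            )
        """)

        stats = {"cpu_mhz": 2400.0, "mem_total_kb": 8388608, "loadavg_1m": 0.42}
        conn.execute("INSERT INTO samples VALUES (?, ?, ?)",
                     (17, int(time.time()), json.dumps(stats)))

        # The "latest chunk" the UI needs: one row back, deserialized in code.
        row = conn.execute("SELECT payload FROM samples WHERE host_id = ? "
                           "ORDER BY taken_at DESC LIMIT 1", (17,)).fetchone()
        print(json.loads(row[0])["loadavg_1m"])

    The trade-off shows up exactly where the poster suspects: SQL can no longer filter or sort on individual keys, so analysis means either deserializing in application code or promoting hot keys into real indexed columns later.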

    Read the article

  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short Desc: I'm curious to see whether I can use SQL Analysis Services or some other SQL Server service to mine some data for me and show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs used as the notes column in an issue-tracking (ticketing) application. I would like to use something out of the box (without having to build anything) that can parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so roughly 9-20 character sections of the TEXT blob). This will help me determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application for this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.
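
    Outside the database, the mining itself is small; here is a toy sketch of the two- and three-word phrase counting (in Python, with a made-up stand-in for the Notes column), just to pin down what "commonly used byte sequences" amounts to:

        from collections import Counter
        import re

        # Toy stand-in for the 10,000-row Notes column.
        notes = [
            "Rebooted the router and cleared the DNS cache",
            "Cleared the DNS cache, issue resolved",
            "Asked user to reboot; cleared the DNS cache afterwards",
        ]

        def ngrams(text, n):
            words = re.findall(r"[a-z']+", text.lower())
            return zip(*(words[i:] for i in range(n)))

        counts = Counter()
        for note in notes:
            for n in (2, 3):                 # two- and three-word phrases
                counts.update(" ".join(g) for g in ngrams(note, n))

        for phrase, freq in counts.most_common(5):
            print(freq, phrase)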

    Read the article

  • accepts_nested_attributes_for and nested_form plugin

    - by Denis
    Hi folks, I have the following code in a _form.html.haml partial; it's used for both the new and edit actions. (FYI, I use Ryan Bates' nested_form plugin.)

        .fields
          - f.fields_for :transportations do |builder|
            = builder.collection_select :person_id, @people, :id, :name, {:multiple => true}
            = builder.link_to_remove 'effacer'
          = f.link_to_add "ajouter", :transportations

    This works fine for the new action. For the edit action, as explained in the docs, I have to add the :id of already existing associations, so I have to add something like:

        = builder.hidden_field :id, ?the value? if ?.new_record?

    How can I get the value? Here is the doc of accepts_nested_attributes_for for reference (source: http://github.com/rails/rails/blob/master/activerecord/lib/active_record/nested_attributes.rb#L332):

        # Assigns the given attributes to the collection association.
        #
        # Hashes with an <tt>:id</tt> value matching an existing associated record
        # will update that record. Hashes without an <tt>:id</tt> value will build
        # a new record for the association. Hashes with a matching <tt>:id</tt>
        # value and a <tt>:_destroy</tt> key set to a truthy value will mark the
        # matched record for destruction.
        #
        # For example:
        #
        # assign_nested_attributes_for_collection_association(:people, {
        #   '1' => { :id => '1', :name => 'Peter' },
        #   '2' => { :name => 'John' },
        #   '3' => { :id => '2', :_destroy => true }
        # })
        #
        # Will update the name of the Person with ID 1, build a new associated
        # person with the name `John', and mark the associatied Person with ID 2
        # for destruction.
        #
        # Also accepts an Array of attribute hashes:
        #
        # assign_nested_attributes_for_collection_association(:people, [
        #   { :id => '1', :name => 'Peter' },
        #   { :name => 'John' },
        #   { :id => '2', :_destroy => true }
        # ])

    Thanks for your help.

    Read the article

  • ASP.NET MVC solution to a forms application?

    - by Gloria Huang
    Hello, we're building a survey system using ASP.NET MVC and wondered if anyone can offer suggestions on the architecture. Here's the problem we're trying to solve.

    Essentially an agency sends out several surveys every year. They're very structured and not like SurveyMonkey-style surveys -- they're really applications with feedback. Much like a visa application, there are lots of things respondents need to do, and sometimes it takes them 2-3 weeks to fill one out. They can upload files (proofs of purchase etc. -- PDF/JPG) and also multiple "items". Say, for instance, they've worked for McDonalds: there could be 20 different franchises, and they build a list of the locations they've worked at. Three weeks later there could be another 3 new locations while 2 may have closed down, so we need to ensure the forms can handle those situations.

    The forms themselves (markup and data) change every year -- I should mention that this is for a taxation/finance/budget system. We were thinking of using MVC, with XML to store the data (temporarily), XSD to validate the data, and XSL to transform the data into presentable markup (for them to fill out); once they "Submit" an application it gets stored into the DB in the relevant areas. When the user starts the application process, they can save their progress so far (we validate whatever they entered and ignore anything they haven't), and we save it as an XML blob in the DB. When they're finally ready to submit, we do a full validation, upload the files and store them securely (they include business proofs and accounting statements), and then run some workflows.

    What I'm really concerned about is how to manage changing form versions (a year later). How are form/application systems written these days? We have 2 months to pull this off and about 30 forms to deliver -- so 30x XML, 30x XSD, 30x XSL.
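
    For the XSD leg of that pipeline, validation is a one-call affair in most stacks; a toy sketch (Python with lxml here purely for illustration -- the schema and document are invented stand-ins for one year's form):

        from lxml import etree

        # Toy stand-ins for one year's form schema and a saved application blob.
        xsd = etree.XMLSchema(etree.XML("""
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="application">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="location" type="xs:string"
                            minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
              <xs:attribute name="year" type="xs:gYear" use="required"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>
        """))

        draft = etree.XML(
            '<application year="2010"><location>Store #12</location></application>')

        # A saved draft can be checked loosely; a final submit must pass in full.
        print(xsd.validate(draft))                        # True
        print(xsd.validate(etree.XML("<application/>")))  # False: missing year

    Versioning then reduces to keeping each year's XSD alongside the XML blobs it validated, so old applications can still be checked against the schema they were written under.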

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields) by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned, but does not have a high priority.)
    - Can we simply migrate the database and let Delphi handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I have found is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects; for Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work ahead. While we could try to avoid persistent fields (and add calculated fields at run time), we would of course prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or to send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g. as "data" : "< unique identifier " in the JSON, with the first bytes of the data blob being < unique identifier ?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks
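
    One number worth having in hand (my illustration, not from the question): binary embedded in JSON has to be text-encoded, and base64 alone adds roughly a third on the wire before any JSON escaping:

        import base64
        import json
        import os

        blob = os.urandom(100 * 1024)        # stand-in for a 100 KB binary blob

        as_text = base64.b64encode(blob).decode("ascii")
        envelope = json.dumps({"id": "abc123", "data": as_text})

        print(len(blob))      # 102400 raw bytes
        print(len(envelope))  # ~136,500 bytes once base64-encoded and JSON-wrapped

    That overhead, plus the CPU cost of decoding on the phone, is the usual argument for the second option: send the small JSON with an identifier and fetch the blob raw over a separate request.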

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason, when I sort this query by DESC it's super fast, but sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out, performance is the same both ways, but the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
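
    For what it's worth, the standard suggestion for this query shape (my note, not from the thread) is a composite index -- in MySQL, ALTER TABLE posts ADD INDEX feed_published (feed_id, published) -- so one index serves both the IN filter and the sort. The same idea sketched against SQLite, just to keep the snippet self-contained:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY,"
                    " feed_id INT, published INT)")
        # Composite index: equality/IN on the leading column,
        # ORDER BY on the trailing one.
        con.execute("CREATE INDEX feed_published ON posts (feed_id, published)")

        plan = con.execute(
            "EXPLAIN QUERY PLAN "
            "SELECT id FROM posts WHERE feed_id IN (1, 2, 3) "
            "ORDER BY published LIMIT 50"
        ).fetchall()
        print(plan)  # inspect how feed_published drives the search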

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields) by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned, but does not have a high priority.)
    - Can we simply migrate the database and let Delphi handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Read the article

  • Find actual value of PHP variable

    - by Simon S
    Hi all. I am having a real headache reading a tab-delimited text file and inserting it into a MySQL database. The tab-delimited text file was generated (I think) from an MS SQL database, and I have written a simple script to read in the file and insert it into an existing table in my MySQL database. However, there seems to be some problem with the data in the txt file. When my PHP script parses the file and I output the INSERT statements, the values in each of the fields are longer than they should be. For example, the first field should be a simple two-character alphanumeric value. If I echo out the INSERT statements, using Firebug (in Firefox), between each of the characters is a question mark in a black diamond. If I var_dump the values, I get the following:

        string(5) "A1"

    Now, this clearly shows a two-character string, but var_dump tells me it is five characters long!! If I trim() the value, all I get is the first character (in this case "A"). How can I get at the other characters, even if it is only to remove them? Additionally, this appears to be forcing MySQL to insert the value as a BLOB, not as a varchar as it should. Simon
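
    Those black diamonds are the Unicode replacement character, which usually means a wider encoding is being read byte-by-byte -- MS SQL exports are often UTF-16LE. That is only a guess from the symptoms, but it matches the interleaved "extra" characters and is easy to test (sketched in Python; in PHP, iconv('UTF-16LE', 'UTF-8', $value) would do the conversion):

        # Hypothetical reproduction: the two-character value "A1" exported
        # as UTF-16LE, then read as raw single-byte data.
        raw = "A1".encode("utf-16-le")   # b'A\x001\x00' -- NULs interleaved

        print(len(raw))                  # 4 bytes for a "two character" value
        print(raw.decode("utf-16-le"))   # decodes cleanly back to 'A1'

    A byte-order mark or stray terminator would account for the fifth byte var_dump reports; a hex dump of the first line of the file would settle it.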

    Read the article

  • Show cue banner for wpf ComboBox with grouping

    - by Adam Duston
    I have a ComboBox in my WPF form:

        <ComboBox Margin="75,0,15,102" Name="videoFormatCombo" Height="23"
                  VerticalAlignment="Bottom"
                  DataContext="{StaticResource GroupedVideoFormats}"
                  ItemsSource="{Binding}"
                  ItemTemplate="{StaticResource VideoFormatTemplate}">
            <ComboBox.GroupStyle>
                <GroupStyle HeaderTemplate="{StaticResource GroupHeader}"/>
            </ComboBox.GroupStyle>
        </ComboBox>

    As you might be able to guess, GroupedVideoFormats is a CollectionViewSource with grouping. I need to get a cue banner to display for this ComboBox. I've attempted the solution that is (very verbosely) outlined in this blog post, but it will not work for a ComboBox with grouped data. The two solutions outlined in http://stackoverflow.com/questions/2548757/how-can-the-blank-space-in-a-c-combobox-be-filled-as-a-hint-for-the-user are for Windows Forms ComboBoxes only and won't work with WPF.

    If it would help to see all the original source, this particular form is on github: http://github.com/8planes/mirovideoconverter/blob/master/MSWindows/Windows/FileSelect.xaml . It's an open-source project, so the entire project is on github: http://github.com/8planes/mirovideoconverter/tree/master/MSWindows .

    Thank you for any advice! Adam

    Read the article

  • php & mySQL: Storing doc, xls, zip, etc. with limited access and archiving

    - by Devner
    Hi all, in my application I have a provision for users to upload files like doc, xls, zip, etc. I would like to know how to store these files on my website and have only restricted people access them. I may have a group of people, and only that group should access the uploaded files. I know that some may just copy the link to a document and pass it to a non-permitted user, who can then download it. So how can I prevent that? How can I check that the request to download a file was made by a legitimate user who has access to it? The usernames of the group members are stored in the database, along with each document's name and location, so the members can access it. But how do I prevent non-permitted users from being able to access that confidential data in all ways?

    With the above in mind, how do I store these documents? Do I store them in a BLOB column in the database, or just let users upload to a folder and merely store the path to the file in the database? The security of the documents is of utmost importance, so any procedure that could facilitate this feature would definitely help. I am not into object-oriented programming, so if you have simpler code that you would like to share with me, I would greatly appreciate it.

    Also, how do I archive documents that are old? Say there are documents that are one year old and I want to conserve my website space by archiving them, but still make them available to users when they need them. How do I go about this? Thank you.
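
    The usual shape of the answer (my sketch, not from the question) is a gatekeeper: keep the uploads outside the web root so no direct URL reaches them, and stream every download through a script that checks the requester's session against the access list first. Sketched in Python for brevity -- the same structure applies to a PHP download.php; all names and paths are illustrative:

        import os

        UPLOAD_DIR = "/srv/app/private_uploads"   # outside the web root

        # Stand-in for the real DB lookup: which users may read which document.
        ACL = {"report.xls": {"alice", "bob"}}

        def serve_document(username, filename):
            # Never trust the requested name: drop any path components.
            safe_name = os.path.basename(filename)
            if username not in ACL.get(safe_name, set()):
                return 403, b"Forbidden"           # not in the permitted group
            path = os.path.join(UPLOAD_DIR, safe_name)
            if not os.path.exists(path):
                return 404, b"Not found"
            with open(path, "rb") as f:            # stream the bytes ourselves
                return 200, f.read()

        # A session-authenticated request, not a bare file URL, triggers this.
        status, body = serve_document("alice", "report.xls")

    A passed-around link then fails the ACL check, because the session, not the URL, grants access. The same gate also answers the BLOB-vs-folder question: either works, as long as nothing is reachable without going through it.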

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) and an ID for each JSON object. A JSON object contains a lot of different information -- say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big.

    My goal is to do real-time searches in the MySQL database. Say I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them against the given search criteria, and returns the IDs of those which qualify. Pros/cons?

    I understand that one might think I should just use plain SQL for this, but the thing is that the JSON object structure is pretty "heavy". If I modeled it as SQL schemas, there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. And even then, every SQL query has to be analyzed to make sure it uses the indexes; otherwise, with a full scan, it is literally painful. With such a structure, the only way "up" is vertical scaling. But I am not sure that is the best option for me, as I can see how the JSON objects (the data structure) will grow, and I can see that their number will grow too.

    Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
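
    For a feel of what the per-chunk verification would look like, here is a minimal Hadoop Streaming-style mapper (my sketch; it assumes the rows are exported as one "id<TAB>json" line each, which is one easy interchange format):

        #!/usr/bin/env python
        import json
        import sys

        # Search criteria; in a real job these would arrive via job configuration.
        CRITERIA = {"state": "California", "city": "San Francisco"}

        # Streaming mapper: reads "id<TAB>json" lines, emits the IDs that qualify.
        for line in sys.stdin:
            record_id, _, blob = line.rstrip("\n").partition("\t")
            try:
                obj = json.loads(blob)
            except ValueError:
                continue                       # skip malformed rows
            if all(obj.get(k) == v for k, v in CRITERIA.items()):
                print(record_id)               # reducer can simply collect these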

    Read the article

  • ASP.NET NamingContainer naming convention

    - by EOLeary
    The Background

    Hello! I'm working on a project in which the client has required a lot of things to happen on a single page, and this has resulted in a rather large blob of HTML being rendered out to the client browser. The main issue is with input tags (where the runat="server" attribute is set); these tend to cause a drastic increase in markup size due to validation, UpdatePanel triggers, viewstate, and the control markup itself. I've done what I can to reduce the number of triggers I'm using, I'm compressing the viewstate (to something like 8% of the original viewstate size), I've gotten rid of a lot of ASP.NET validators and rolled my own, and I've been using ClientIdMode to reduce the length of the ID attributes of many ASP.NET elements. All of this combined significantly reduces the amount of HTML being sent to the client (for example, going from 2 megabytes for a request down to 500-600 kb -- these are HUGE pages, mind you).

    The Issue

    One area I've been having trouble reducing is simply the auto-generated 'name' attribute of input elements:

        <input name="ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText" type="text" value="Investigation Week 1" maxlength="100" id="_taskTree_0__taskDetails_0__detailList_0__row_0__descriptionText_0" style="width:170px;">

    As you can see above, the name attribute is 139 out of 297 characters -- almost 50% of the tag markup is taken up by that HUGE name. Does anyone have any ideas on how to stick a hook in somewhere in ASP.NET where I can translate these or generate them differently? Say instead of ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText, it could be a GUID like 0x0AEED4B6445A11E08F873606E0D72085, which is 105 characters shorter. Any help would be greatly appreciated!

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

        C:\Documents and Settings\bob>C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost
        mysqldump: Out of memory (Needed 10380928 bytes)
        mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server

    As you can see, I am using --quick and --skip-opt too; I cannot figure out what is causing the issue. The server log has the following messages:

        100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
        InnoDB: a BLOB with malloc! Total allocated memory
        InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve
        InnoDB: a big column. Table name adobe/tb_form_data

    Any help in this regard is highly appreciated.

    P.S: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam

    Read the article

  • "image contains error", trying to create and display images using google app engine

    - by bert
    Hello all, the general idea is to create a galaxy-like map. I run into problems when I try to display a generated image. I used the Python Imaging Library to create the image and store it in the datastore. When I try to load the image, I get no error on the log console and no image in the browser. When I copy/paste the image link (including the datastore key) I get a black screen and the following message:

        The image "view-source:/localhost:8080/img?img_id=ag5kZXZ-c3BhY2VzaW0xMnINCxIHTWFpbk1hcBgeDA" cannot be displayed because it contains errors.

    The Firefox error console shows:

        Error: Image corrupt or truncated: /localhost:8080/img?img_id=ag5kZXZ-c3BhY2VzaW0xMnINCxIHTWFpbk1hcBgeDA

    Here is the code:

        import cgi
        import datetime
        import urllib
        import webapp2
        import jinja2
        import os
        import math
        import sys
        from google.appengine.ext import db
        from google.appengine.api import users
        from PIL import Image

        #SNIP

        #class to define the map entity
        class MainMap(db.Model):
            defaultmap = db.BlobProperty(default=None)

        #SNIP

        class Generator(webapp2.RequestHandler):
            def post(self):
                #SNIP
                test = Image.new("RGBA", (100, 100))
                dMap = MainMap()
                dMap.defaultmap = db.Blob(str(test))
                dMap.put()
                #SNIP
                result = db.GqlQuery("SELECT * FROM MainMap LIMIT 1").fetch(1)
                if result:
                    print "item found<br>"  # debug info
                    if result[0].defaultmap:
                        print "defaultmap found<br>"  # debug info
                        string = "<div><img src='/img?img_id=" + str(result[0].key()) + "' width='100' height='100'></img>"
                        print string
                    else:
                        print "nothing found<br>"
                else:
                    self.redirect('/?=error')
                self.redirect('/')

        class Image_load(webapp2.RequestHandler):
            def get(self):
                self.response.out.write("started Image load")
                defaultmap = db.get(self.request.get("img_id"))
                if defaultmap.defaultmap:
                    try:
                        self.response.headers['Content-Type'] = "image/png"
                        self.response.out.write(defaultmap.defaultmap)
                        self.response.out.write("Image found")
                    except:
                        print "Unexpected error:", sys.exc_info()[0]
                else:
                    self.response.out.write("No image")

        #SNIP

        app = webapp2.WSGIApplication([('/', MainPage),
                                       ('/generator', Generator),
                                       ('/img', Image_load)],
                                      debug=True)

    The browser shows the "item found" and "defaultmap found" strings and a broken image link; the exception handling does not catch any errors. Thanks for your help. Regards, Bert
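
    Two things stand out in that code (my reading, not a confirmed answer): db.Blob(str(test)) stores the string representation of the PIL Image object rather than encoded PNG bytes, and the /img handler writes "Image found" after the image body, which by itself would corrupt the PNG. A sketch of the first fix:

        from io import BytesIO   # StringIO.StringIO on the Python 2 GAE runtime

        from PIL import Image

        # Encode the image as real PNG bytes before wrapping it in db.Blob;
        # str(image) only yields "<PIL.Image.Image image mode=RGBA ...>".
        test = Image.new("RGBA", (100, 100))
        buf = BytesIO()
        test.save(buf, "PNG")
        png_bytes = buf.getvalue()   # this is what db.Blob(...) should receive

        # The /img handler must then send nothing but these bytes: the stray
        # self.response.out.write("Image found") after the blob corrupts it.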

    Read the article

  • What are the hibernate annotations used to persist a value-typed Map with an enumerated type as a key?

    - by Jason Novak
    I am having trouble getting the right Hibernate annotations to use on a value-typed Map with an enumerated class as a key. Here is a simplified (and extremely contrived) example:

        public class Thing {
            public String id;
            public Letter startLetter;
            public Map<Letter,Double> letterCounts = new HashMap<Letter, Double>();
        }

        public enum Letter {
            A, B, C, D
        }

    Here are my current annotations on Thing:

        @Entity
        public class Thing {
            @Id
            public String id;

            @Enumerated(EnumType.STRING)
            public Letter startLetter;

            @CollectionOfElements
            @JoinTable(name = "Thing_letterFrequencies", joinColumns = @JoinColumn(name = "thingId"))
            @MapKey(columns = @Column(name = "letter", nullable = false))
            @Column(name = "count")
            public Map<Letter,Double> letterCounts = new HashMap<Letter, Double>();
        }

    Hibernate generates the following DDL to create the tables for my MySQL database:

        create table Thing (id varchar(255) not null, startLetter varchar(255), primary key (id)) type=InnoDB;
        create table Thing_letterFrequencies (thingId varchar(255) not null, count double precision, letter tinyblob not null, primary key (thingId, letter)) type=InnoDB;

    Notice that Hibernate tries to define letter (my map key) as a tinyblob, yet it defines startLetter as a varchar(255), even though both are of the enumerated type Letter. When I try to create the tables, I see the following error:

        BLOB/TEXT column 'letter' used in key specification without a key length

    I googled this error, and it appears that MySQL has issues when you try to make a tinyblob column part of a primary key, which is what Hibernate needs to do with the Thing_letterFrequencies table. So I would rather have letter mapped to varchar(255) the way startLetter is. Unfortunately, I've been fussing with the MapKey annotation for a while now and haven't been able to make this work. I've also tried @MapKeyManyToMany(targetEntity=Product.class) without success. Can anyone tell me the correct annotations for my letterCounts map, so that Hibernate will treat its key the same way it does startLetter?

    Read the article

  • Can any Linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder?

    - by Simon B.
    I have a folder with ~10,000 subfolders. Can any Linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder? (In which case I lose, if I want to do this for 10k+ folders.) I guess the answer is yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345

    My problem: I need to keep folders time-sorted with the most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are ok. When a file (typically MS Excel) is opened and closed again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later go and look for the newest file in its folder, since the file date of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also, in case several files in the same folder are used or created, it makes sense to buffer the timestamping of that folder's parents, to at least collapse a bunch of updates into one delayed update.

    I'm looking for a Linux solution. I have some code that can be run on a Windows server; most of the queuing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb

    Available APIs: The relative of inotify on Windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx

    Samba? Patching the Samba source is a possibility, but perhaps there are already hooks available? Other possibilities: client-side (various Windows versions) spying on file activities in order to update folders recursively?
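
    For the Linux side, the pyinotify package hides the per-directory bookkeeping behind one call: rec=True walks the tree and auto_add=True watches folders created later. It still consumes one inotify watch per directory internally, so /proc/sys/fs/inotify/max_user_watches may need raising for 10k+ folders. A minimal sketch (paths and handler are illustrative):

        import pyinotify

        wm = pyinotify.WatchManager()
        # IN_CLOSE_WRITE fires only after a writer closes the file, which fits
        # the "wait until the file is closed" requirement above.
        mask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO

        class Handler(pyinotify.ProcessEvent):
            def process_default(self, event):
                # event.path is the folder of the changed file: queue it here
                # for the delayed re-stamping pass.
                print("queue folder:", event.path)

        notifier = pyinotify.Notifier(wm, Handler())
        wm.add_watch("/SharedRoot", mask, rec=True, auto_add=True)
        notifier.loop()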

    Read the article

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see whether I can use SQL Analysis Services or some other MS SQL service to mine some data for me and show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs used as the notes column in an issue-tracking (ticketing) application. I would like to use something out of the box (without having to build anything) that can parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so roughly 9-20 character sections of the TEXT blob). This will help me determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application for this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes, I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know that). But I can't seem to get a particular query working: each row contains NULL in cool_json when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
            "{",
            "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
            "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st ) ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but with multiple JOINs / GROUP_CONCATs it's messed up. When I move the GROUP_CONCATs out of the cool_json field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
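
    A likely culprit (my guess, not confirmed in the thread): CONCAT returns NULL as soon as any argument is NULL, and GROUP_CONCAT over a LEFT JOIN that matched nothing is NULL, so one empty list nulls the whole blob. Wrapping each GROUP_CONCAT in IFNULL(..., '') is the usual cure. The same propagation, shown with SQLite's concatenation operator so the snippet is self-contained:

        import sqlite3

        con = sqlite3.connect(":memory:")

        # NULL poisons string concatenation, exactly like MySQL's CONCAT(...).
        print(con.execute("SELECT '{' || NULL || '}'").fetchone())
        # -> (None,)

        # COALESCE (IFNULL in MySQL) substitutes '' and rescues the blob.
        print(con.execute("SELECT '{' || COALESCE(NULL, '') || '}'").fetchone())
        # -> ('{}',)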

    Read the article

  • Null Safe dereferencing in Java like ?. in Groovy using Maybe monad

    - by Sathish
    I'm working on a codebase ported from Objective-C to Java. There are several usages of method chaining without null checks:

        dog.collar().tag().name()

    I was looking for something similar to Groovy's safe-dereferencing operator ?. instead of writing null checks:

        dog.collar?.tag?.name

    This led me to the Maybe monad, which has the notion of Nothing instead of null. But all the implementations of Nothing I came across throw an exception when the value is accessed, which still doesn't solve the chaining problem. I made Nothing return a mock, which behaves like the NullObject pattern, and that does solve the chaining problem. Is there anything wrong with this implementation of Nothing? http://github.com/sathish316/jsafederef/blob/master/src/s2k/util/safederef/Nothing.java

    As far as I can see: 1. It feels odd to use a mocking library in code. 2. It doesn't stop at the first null. 3. How do I distinguish between a null result caused by a null reference and name actually being null? How is that distinguished in Groovy code?

    Read the article

  • QuadTrees - how to update when internal items are moving

    - by egarcia
    I've implemented a working QuadTree. It subdivides 2-D space in order to accommodate items, identified by their bounding box (x, y, width, height), in the smallest possible quad (up to a minimum area). My code is based on this implementation (mine is in Lua instead of C#): http://www.codeproject.com/KB/recipes/QuadTree.aspx

    I've been able to implement insertions and deletions successfully. I've now turned my attention to the update() function, since my items' positions and dimensions change over time. My first implementation works, but it is quite naive:

        function QuadTree:update(item)
          self:remove(item)
          return self.root:insert(item)
        end

    Yup, I basically remove and reinsert every item every time it moves. This works, but I'd like to optimize it a bit more; after all, most of the time a moving item still remains on the same quadtree node between iterations. Is there any standard way to deal with this kind of update?

    In case it helps, my code is here: http://github.com/kikito/passion/blob/master/ai/QuadTree.lua

    I'm not looking for someone to implement it for me; pointers to an existing working implementation (even in other languages) would suffice.
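
    The common optimization (my sketch below, in Python rather than the poster's Lua, with insert() stubbed out) is to give each item a back-reference to its node: on update, do nothing if the item still fits its node's bounds, and otherwise climb only as far up the tree as needed before re-inserting from there:

        # Minimal sketch: re-insert only when an item's bounding box no longer
        # fits its current node. Names and structure are illustrative.

        class Node:
            def __init__(self, x, y, size, parent=None):
                self.x, self.y, self.size = x, y, size
                self.parent = parent
                self.items = set()

            def contains(self, item):
                return (self.x <= item.x and self.y <= item.y
                        and item.x + item.w <= self.x + self.size
                        and item.y + item.h <= self.y + self.size)

        class Item:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h
                self.node = None          # back-reference set on insert

        def insert(node, item):           # stub for the real subdivision logic
            node.items.add(item)
            item.node = node
            return node

        def update(tree_root, item):
            node = item.node
            if node is not None and node.contains(item):
                return node               # common case: still fits, nothing to do
            # Climb until an ancestor contains the item, then re-insert there.
            while node is not None and not node.contains(item):
                node.items.discard(item)
                node = node.parent
            return insert(node if node is not None else tree_root, item)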

    Read the article

  • Delete element from Set

    - by Blitzkr1eg
    Hi. I have two classes, Tema (homework) and Disciplina (course), where a course has a Set of homeworks. In Hibernate I have mapped this as a one-to-many association like this:

        <class name="model.Disciplina" table="devgar_scoala.discipline" >
          <id name="id" >
            <generator class="increment"/>
          </id>
          <set name="listaTeme" table="devgar_scoala.teme">
            <key column="Discipline_id" not-null="true" ></key>
            <one-to-many class="model.Tema" ></one-to-many>
          </set>
        </class>

        <class name="model.Tema" table="devgar_scoala.teme" >
          <id name="id">
            <generator class="increment" />
          </id>
          <property name="titlu" type="string" />
          <property name="cerinta" type="binary">
            <column name="cerinta" sql-type="blob" />
          </property>
        </class>

    The problem is that it will add rows (insert rows in the 'teme' table) but it won't delete any rows, and I get no exceptions thrown. I'm using the merge() method.

    Read the article

  • gdb: Cannot find new threads: generic error

    - by Alexander Gladysh
    When I run GDB against a program which loads a .so that is linked against pthreads, GDB reports the error "Cannot find new threads: generic error". I probably miss something in my Ubuntu configuration (it was installed from a minimal install). Any clues?

        $ gdb --args lua -lluarocks.require
        GNU gdb (GDB) 7.0-ubuntu
        Copyright (C) 2009 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.
        Type "show copying" and "show warranty" for details.
        This GDB was configured as "x86_64-linux-gnu".
        For bug reporting instructions, please see:
        <http://www.gnu.org/software/gdb/bugs/>...
        Reading symbols from /usr/bin/lua...(no debugging symbols found)...done.
        (gdb) run
        Starting program: /usr/bin/lua -lluarocks.require
        Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
        require 'ev'
        [Thread debugging using libthread_db enabled]
        Cannot find new threads: generic error
        (gdb) q
        A debugging session is active.
        Inferior 1 [process 4986] will be killed.
        Quit anyway? (y or n) y

    This function gets called on require 'ev': http://github.com/brimworks/lua-ev/blob/master/lua_ev.c#L25-65

    Additional information about my system:

        $ uname -a
        Linux localhost 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 04:38:19 UTC 2010 x86_64 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 9.10
        Release:        9.10
        Codename:       karmic

    Read the article

  • Python to C# with openSSL requirement

    - by fonix232
    Hey there again! Today I ran into a problem when I was making a new theme creator for Chrome. As you may know, Chrome uses a "new" file format, called CRX, to package its plugins and themes. It is a basic zip file, but a bit modified:

        "Cr24" + derkey + signature + zipFile

    And here comes the problem. There are only two CRX creators, written in Ruby or Python. I don't know either language too well (I had some basic experience in Python, but mostly with PyS60), so I would like to ask you to help me convert this Python app to a C# class. Here is the source of crxmake.py:

        #!/usr/bin/python
        # Cribbed from http://github.com/Constellation/crxmake/blob/master/lib/crxmake.rb
        # and http://src.chromium.org/viewvc/chrome/trunk/src/chrome/tools/extensions/chromium_extension.py?revision=14872&content-type=text/plain&pathrev=14872
        # from: http://grack.com/blog/2009/11/09/packing-chrome-extensions-in-python/
        import sys
        from array import *
        from subprocess import *
        import os
        import tempfile

        def main(argv):
            arg0,dir,key,output = argv
            # zip up the directory
            input = dir + ".zip"
            if not os.path.exists(input):
                os.system("cd %(dir)s; zip -r ../%(input)s . -x '.svn/*'" % locals())
            else:
                print "'%s' already exists using it" % input
            # Sign the zip file with the private key in PEM format
            signature = Popen(["openssl", "sha1", "-sign", key, input], stdout=PIPE).stdout.read();
            # Convert the PEM key to DER (and extract the public form) for inclusion in the CRX header
            derkey = Popen(["openssl", "rsa", "-pubout", "-inform", "PEM", "-outform", "DER", "-in", key], stdout=PIPE).stdout.read();
            out=open(output, "wb");
            out.write("Cr24")  # Extension file magic number
            header = array("l");
            header.append(2);  # Version 2
            header.append(len(derkey));
            header.append(len(signature));
            header.tofile(out);
            out.write(derkey)
            out.write(signature)
            out.write(open(input).read())
            os.unlink(input)
            print "Done."

        if __name__ == '__main__':
            main(sys.argv)

    Please, could you help me?

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So -- I've been using this method of file uploading for a while, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word "deprecated" floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

    - Users MUST be able to select multiple files for uploading in the dialog.
    - I MUST be able to receive status updates on the transmission of a file (progress bars).
    - I would like to be able to use PUT requests instead of POST.
    - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
    - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers support the desktop/request objects Gears uses, or if there is a Flash uploader that has the features I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
          $.each(files, function(k, file) {
            // this code actually goes through a queue, and creates some status bars
            // but it is unimportant to show here...
            sendFile(file);
          });
        }

        function sendFile(file) {
          google.gears.factory.create('beta.httprequest');
          request.open('PUT', upl.url);
          request.setRequestHeader('filename', file.name);
          request.upload.onprogress = function(e) {
            // gives me % status updates... allows e.loaded/e.total
          };
          request.onreadystatechange = function() {
            if (request.readyState == 4) {
              // completed the upload!
            }
          };
          request.send(file.blob);
          return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed that item to a "like" instead of a "must".

    Read the article
