Search Results

Search found 18786 results on 752 pages for 'document storage'.


  • Excel document incorrect format

    - by Jim
    I have a macro-enabled workbook. I change the name of the .xlsm file to [FileName].xlsm.zip and when I unzip it I get some folders. I then put these extracted folders into another folder, zip it back up and change the extension back to .xlsm. When I now try to open it, I get an 'unreadable' error. I am not changing any content here, just extracting and zipping it back. What could be the problem?
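
    A likely cause, assuming the re-zip wraps the extracted files in an extra parent folder: an .xlsm is an Open Packaging Conventions archive, so [Content_Types].xml, _rels and xl must sit at the root of the zip, not inside a subfolder. A minimal sketch of re-packaging the extracted contents at the archive root with System.IO.Compression (.NET 4.5+, System.IO.Compression.FileSystem assembly; the paths are illustrative):

        using System.IO.Compression;

        class RepackXlsm
        {
            static void Main()
            {
                // "extracted" is assumed to contain [Content_Types].xml, _rels and xl directly.
                // includeBaseDirectory: false keeps those entries at the root of the archive,
                // which is what Excel expects once the file is renamed back to .xlsm.
                ZipFile.CreateFromDirectory(
                    sourceDirectoryName: @"C:\temp\extracted",
                    destinationArchiveFileName: @"C:\temp\Workbook.xlsm",
                    compressionLevel: CompressionLevel.Optimal,
                    includeBaseDirectory: false);
            }
        }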

    Read the article

  • inheritance in document database?

    - by nils petersohn
    I am wondering because I searched the PDFs "xxx the definitive guide" and "beginning xxx" for the word "inheritance" but I didn't find anything. Am I missing something? I am currently doing table-per-hierarchy inheritance with Hibernate and MySQL; has that become deprecated for some reason in xxx? (Replace xxx with the "not only SQL" database you like.)
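
    For context, a common way document databases handle this without a table-per-hierarchy mapping is to persist each subclass as one whole document and record the concrete type in a discriminator field, so there is simply less to say about inheritance as a storage concern. A minimal, store-agnostic sketch (the class names and the "$type" field are illustrative, not any particular product's convention):

        abstract class Animal
        {
            public string Id { get; set; }
            public string Name { get; set; }
        }

        class Dog : Animal
        {
            public bool Fetches { get; set; }
        }

        // Serialized, a Dog might land in the store as a single document such as:
        //   { "Id": "animals/1", "Name": "Rex", "Fetches": true, "$type": "Dog" }
        // The discriminator lets the client materialize the right subclass on load;
        // no per-subclass tables or joins are involved.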

    Read the article

  • Simplest way to flatten document to a view in RavenDB

    - by degorolls
    Given the following classes:

        public class Lookup
        {
            public string Code { get; set; }
            public string Name { get; set; }
        }

        public class DocA
        {
            public string Id { get; set; }
            public string Name { get; set; }
            public Lookup Currency { get; set; }
        }

        public class ViewA // Simply a flattened version of the doc
        {
            public string Id { get; set; }
            public string Name { get; set; }
            public string CurrencyName { get; set; } // View just gets the name of the currency
        }

    I can create an index that allows clients to query the view as follows:

        public class A_View : AbstractIndexCreationTask<DocA, ViewA>
        {
            public A_View()
            {
                Map = docs => from doc in docs
                              select new ViewA
                              {
                                  Id = doc.Id,
                                  Name = doc.Name,
                                  CurrencyName = doc.Currency.Name
                              };
                Reduce = results => from result in results
                                    group on new ViewA
                                    {
                                        Id = result.Id,
                                        Name = result.Name,
                                        CurrencyName = result.CurrencyName
                                    } into g
                                    select new ViewA
                                    {
                                        Id = g.Key.Id,
                                        Name = g.Key.Name,
                                        CurrencyName = g.Key.CurrencyName
                                    };
            }
        }

    This certainly works and produces the desired result of a view with the data transformed to the structure required at the client application. However, it is unworkably verbose, will be a maintenance nightmare and is probably fairly inefficient with all the redundant object construction. Is there a simpler way of creating an index with the required structure (ViewA) given a collection of documents (DocA)?

    FURTHER INFORMATION

    The issue appears to be that in order to have the index hold the data in the transformed structure (ViewA), we have to do a Reduce. It appears that a Reduce must have both a GROUP ON and a SELECT in order to work as expected, so the following are not valid:

    INVALID REDUCE CLAUSE 1:

        Reduce = results => from result in results
                            group on new ViewA
                            {
                                Id = result.Id,
                                Name = result.Name,
                                CurrencyName = result.CurrencyName
                            } into g
                            select g.Key;

    This produces:

        System.InvalidOperationException: Variable initializer select must have a lambda expression with an object create expression

    Clearly we need to have the 'select new'.

    INVALID REDUCE CLAUSE 2:

        Reduce = results => from result in results
                            select new ViewA
                            {
                                Id = result.Id,
                                Name = result.Name,
                                CurrencyName = result.CurrencyName
                            };

    This produces:

        System.InvalidCastException: Unable to cast object of type 'ICSharpCode.NRefactory.Ast.IdentifierExpression' to type 'ICSharpCode.NRefactory.Ast.InvocationExpression'.

    Clearly, we also need to have the 'group on new'.

    Thanks for any assistance you can provide. (Note: removing the type (ViewA) from the constructor calls has no effect on the above.)

    Read the article

  • Find/parse server-side <?abc?>-like tags in html document

    - by Iggyhopper
    I guess I need some regex help. I want to find all tags like <?abc?> so that I can replace each one with the result of the code run inside. I just need help with the regex for the tag/code string, not with parsing the code inside :p.

        <b><?abc print 'test' ?></b>

    would result in

        <b>test</b>

    Edit: not specifically abc, but in general, matching (<?[chars] (code group) ?>)
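
    As a sketch in C# (the tag-name pattern, the delegate signature and the evaluator are assumptions, and the lazy match will not cope with '?>' appearing inside the code itself):

        using System.Text.RegularExpressions;

        class TagExpander
        {
            // Matches <?name code ?> and captures the tag name and the code body.
            static readonly Regex ServerTag = new Regex(
                @"<\?(?<name>\w+)\s+(?<code>.*?)\s*\?>",
                RegexOptions.Singleline);

            // Replaces every tag with whatever the supplied evaluator returns for its code body.
            public static string Expand(string html, System.Func<string, string, string> evaluate)
            {
                return ServerTag.Replace(html,
                    m => evaluate(m.Groups["name"].Value, m.Groups["code"].Value));
            }
        }

        // Example (hypothetical evaluator that only understands "print '...'"):
        //   TagExpander.Expand("<b><?abc print 'test' ?></b>", (name, code) => "test")
        //   returns "<b>test</b>"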

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction

    Hi, I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.

    The idea

    Suppose I have a database structure, in MySQL. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database (with its content). And it is not going to be just one synchronized copy; it could (and should) be multiple replicas (at the very least, say, ten copies all over the world). And, the most important thing: it must be secure. By secure I mean only real, accepted transactions will be synchronized to all the other database copies/clones, no matter how many there are.

    Note: since it would be quite difficult to make the synchronization in real time, I will design everything to make this feature dispensable. So it is not required.

    My auto-suggestion

    This is how I am thinking to manage it.

    Time identifiers and update checking: every action (insert, update, delete...) will be stored as the action instruction itself, associated with a time identifier. (I think rather than a DATETIME field it will be an INT one, holding, for example, the number of milliseconds passed since 1st January 2013.) So each copy is going to ask the "neighbour copy" for new actions done since the last update, and execute them after checking they are allowed.

    Problem 1: the "neighbour copy" could be outdated too.
    Solution 1: do not ask just one neighbour; create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this will be inefficient if the number of clones grows too much).

    Problem 2: real-time global synchronization is not active. What if... someone at CLONE_ENTERPRISING inserts a row into TABLE... this row goes to every clone... someone at CLONE_FIXEMALL deletes this row... and at the same time, somewhere in an outdated clone, someone at CLONE_DROPOUT edits this row (now nonexistent at the other clones)?
    Solution 2: easy stuff, force a GLOBAL synchronization before doing any new "depending-on-third-data action" (an edit, for example). This global synchronization will be unnecessary when making an INSERT, for instance.

    Note: well, someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row will exist twice. But it's the same as when we have one single database: in some cases we check whether the same row already exists before doing the final action. Not a problem.

    Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just do some trolling. This is not a problem, since good clones will always be somewhere; clones that have gone bad simply stop being of interest.

    I really appreciate it if you read this far. I know this is not the perfect solution; it possibly has hundreds of holes, but it is my basic starting point. I will appreciate anything you can teach me now. Thanks a lot.

    PS: it could be that all this I am trying already exists and has its own name. Sorry for asking then (I'd thank you for that name anyway, if it exists).
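
    A minimal sketch, just to make the time-identifier and neighbour-pull ideas above concrete (the record shape, the epoch handling and the peer interface are assumptions; permission checking, de-duplication and conflict handling are left out):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One replicated action: the instruction itself plus its time identifier.
        class LoggedAction
        {
            public long TimeId { get; set; }        // milliseconds since 2013-01-01, as proposed
            public string Instruction { get; set; } // e.g. "INSERT INTO ...", stored as text

            static readonly DateTime Epoch = new DateTime(2013, 1, 1, 0, 0, 0, DateTimeKind.Utc);

            public static long Now() => (long)(DateTime.UtcNow - Epoch).TotalMilliseconds;
        }

        class Clone
        {
            readonly List<LoggedAction> log = new List<LoggedAction>();
            public long LastSyncTimeId { get; private set; }

            // Pull from a randomly chosen subset of neighbours, as in Solution 1.
            public void SyncFrom(IEnumerable<Clone> neighbours)
            {
                long since = LastSyncTimeId;
                var news = neighbours.SelectMany(n => n.log)
                                     .Where(a => a.TimeId > since)
                                     .OrderBy(a => a.TimeId);
                foreach (var action in news)
                {
                    // "checking they are allowed" would go here before applying the action.
                    log.Add(action);
                    LastSyncTimeId = action.TimeId;
                }
            }
        }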

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one?

    To create the hash value stored in the DBMS, take:

      - A value that is unique to the DBMS server instance as part of the salt,
      - And the username as a second part of the salt,
      - And create the concatenation of the salt with the actual password,
      - And hash the whole string using the SHA-256 algorithm,
      - And store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance separately. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on.

    The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper.

    Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings?

    Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.)

    There are quite a lot of related SO questions - this list is unlikely to be comprehensive:

      - Encrypting/Hashing plain text passwords in database
      - Secure hash and salt for PHP passwords
      - The necessity of hiding the salt for a hash
      - Clients-side MD5 hash with time salt
      - Simple password encryption
      - Salt generation and Open Source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
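
    For reference, a minimal sketch of the scheme as described: server-unique value and username as the salt, concatenated with the password and hashed with SHA-256, plus the optional per-password random salt from the last question. The concatenation order, the encoding and the salt length are assumptions, and a dedicated password KDF such as PBKDF2 would normally be preferred over a single hash pass:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordHasher
        {
            // serverUniqueValue: per-DBMS-instance value; randomSalt: per-password salt,
            // stored alongside the hash so the check can be repeated at login time.
            public static string Hash(string serverUniqueValue, string userName,
                                      string password, byte[] randomSalt)
            {
                byte[] input = Encoding.UTF8.GetBytes(serverUniqueValue + userName + password);

                byte[] toHash = new byte[randomSalt.Length + input.Length];
                Buffer.BlockCopy(randomSalt, 0, toHash, 0, randomSalt.Length);
                Buffer.BlockCopy(input, 0, toHash, randomSalt.Length, input.Length);

                using (var sha256 = SHA256.Create())
                {
                    return Convert.ToBase64String(sha256.ComputeHash(toHash));
                }
            }

            public static byte[] NewSalt(int bytes = 16)
            {
                var salt = new byte[bytes];
                using (var rng = RandomNumberGenerator.Create())
                {
                    rng.GetBytes(salt);
                }
                return salt;
            }
        }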

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour Domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
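
    To make the hourly roll-up concrete, a sketch of the downsampling step under the figures given above: 60 one-minute chunks of 300 samples (18,000 raw samples) averaged down to 300 stored samples for the hour, i.e. one per 12 seconds. The chunk shape is an assumption:

        using System.Collections.Generic;
        using System.Linq;

        class Rollup
        {
            // Each uploaded chunk: one minute of data, 300 samples at 5 Hz.
            // Returns 300 samples covering the whole hour, each the mean of 60 consecutive raw samples.
            public static List<double> HourlyDownsample(IEnumerable<IReadOnlyList<double>> minuteChunks)
            {
                var hour = minuteChunks.SelectMany(chunk => chunk).ToList(); // 60 * 300 = 18,000 samples

                return hour.Select((value, index) => new { value, bucket = index / 60 })
                           .GroupBy(x => x.bucket)
                           .OrderBy(g => g.Key)
                           .Select(g => g.Average(x => x.value))
                           .ToList();
            }
        }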

    Read the article

  • how to check if internal storage file has any data

    - by user3720291
        public class Save extends Activity {
            int levels = 2;
            int data_block = 1024;
            //char[] data = new char[] {'0', '0'};
            String blankval = "0";
            String targetval = "0";
            String temp;
            String tempwrite;
            String string = "null";
            TextView tex1;
            TextView tex2;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.save);
                Intent intent = getIntent();
                Bundle b = intent.getExtras();
                tex1 = (TextView) findViewById(R.id.textView1);
                tex2 = (TextView) findViewById(R.id.textView2);
                if (b != null) {
                    string = (String) b.get("string");
                }
                loadprev();
                save();
            }

            public void save() {
                if (string.equals("Blank")) blankval = "1";
                if (string.equals("Target")) targetval = "1";
                temp = blankval + targetval;
                try {
                    FileOutputStream fos = openFileOutput("data.gds", MODE_PRIVATE);
                    fos.write(temp.getBytes());
                    fos.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                tex1.setText(blankval);
                tex2.setText(targetval);
            }

            public void loadprev() {
                String final_data = "";
                try {
                    FileInputStream fis = openFileInput("data.gds");
                    InputStreamReader isr = new InputStreamReader(fis);
                    char[] data = new char[data_block];
                    int size;
                    while ((size = isr.read(data)) > 0) {
                        String read_data = String.copyValueOf(data, 0, size);
                        final_data += read_data;
                        data = new char[data_block];
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                char[] tempread = final_data.toCharArray();
                blankval = "" + tempread[0];
                targetval = "" + tempread[1];
            }
        }

    After much tinkering I have finally managed to get my save/load function to work, but it does have an error. Pretty much, I got it to work, then I did a fresh reinstall, deleting data.gds; afterwards the save/load function crashes because the data.gds file has no previous values. Can I use an if statement to check whether data.gds has any values in it? If so, how do I do it, and if not, what could I use instead?

    Read the article

  • How to classify NN/NNP/NNS obtained from POS tagged document as a product feature

    - by Shweta .......
    I'm planning to perform sentiment analysis on reviews of product features (collected from an Amazon dataset). I have extracted the review text from the dataset and performed POS tagging on it. I'm able to extract NN/NNP as well. But my doubt is: how do I come to know whether the extracted words qualify as features of the products? I know there are classifiers in nltk, but I don't know how I should use them for my project. I'm assuming there are 2 ways of finding whether an extracted word is a product feature or not. One is to compare it with a bag of words and find out if my word exists in that. Doubt: how do I create/get the bag of words? The second way is to implement some kind of apriori algorithm to find frequently occurring words as features. I would like to know which method is good and how to go about implementing it. Some pointers to available software or code snippets would be helpful! Thanks!

    Read the article

  • Tools to document/visualize call graph?

    - by Dave Griffiths
    Hi, having recently joined a project with a vast amount of code to get to grips with, I would like to start documenting and visualizing some of the flows through the call graph to give me a better understanding of how everything fits together. This is what I would like to see in my ideal tool:

      - every node is a function/method
      - nodes are connected if one function can call another
      - optional square box in between detailing conditions under which the call is made (or a label icon you can hover over like a tooltip)
      - also an icon on the edge describing parameters
      - hover over a node and a description is displayed
      - optional icons for a node to display pseudo code
      - scenario/domain view - display a subset of the complete diagram for a particular use-case
      - slide view mode - for each frame, the currently executing function is highlighted
      - plenty of options over what to display, to reduce on-screen clutter

    The interactive use of such a tool is key; I'm not looking for a Graphviz type solution because there would be too much clutter. The ability to form a view of a subset of the entire graph would be very handy (maybe with the unimportant clutter greyed out). I don't need automatic generation from source code; I'm happy to enter it manually. Almost like a mind-map. Does that make sense? If you are not aware of such a tool, do you also think it would be useful? (Just in case I decide to go and scratch that itch one day!)

    Read the article

  • Storage location of yellow-blue shield icon

    - by gencha
    Where, in Windows, is this icon stored? I need to use it in a TaskDialog emulation for XP and am having a hard time tracking it down. It's not in shell32.dll, explorer.exe, ieframe.dll or wmploc.dll (as these contain a lot of icons commonly used in Windows).

    Read the article

  • How to copy newly added document with metadata to another document library?

    - by James
    I need to copy the item that the user just added (for example, myresume.doc or financial.xls), along with its metadata (the doc library obtains the columns from a content type, and the content type obtains the columns from site columns), into a folder called "NativeFile". Every doc library has this folder. I know ItemAdded can be used, but then I heard ItemAdded fires before the user has a chance to complete the metadata for the item they just added. What are my options? (New to SharePoint, so some sample code would greatly help, or some good link about a similar issue.) SharePoint 2007: ItemAdded, ItemAdding, ItemUpdating or ItemUpdated...
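
    As a sketch of one option, heavily hedged: hook ItemUpdated rather than ItemAdded, so the copy runs after the user has saved the metadata in the edit form, then copy the item into the NativeFile folder. The URL building and the assumption that SPListItem.CopyTo carries the field values along are things to verify; receiver registration is omitted:

        using Microsoft.SharePoint;

        public class NativeFileCopyReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;

                // Skip items already in the NativeFile folder, so we don't copy the copy.
                if (item.Url.Contains("/NativeFile/"))
                    return;

                string destinationUrl = properties.WebUrl + "/" +
                    item.ParentList.RootFolder.Url + "/NativeFile/" + item.File.Name;

                try
                {
                    DisableEventFiring();            // don't re-enter this receiver for the copy
                    item.CopyTo(destinationUrl);     // copy the document (metadata handling to be verified)
                }
                finally
                {
                    EnableEventFiring();
                }
            }
        }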

    Read the article

  • Image storage as a service

    - by Samuel
    Google App Engine provides an Image API for storing / retrieving images. We are currently not in a position to deploy our application on top of App Engine because of limitations in the Java frameworks (JBoss Seam 2.2.0) we are using to build our J2EE application. We would eventually want to deploy our production application on top of Google App Engine, but what are the short-term options (Java-based open source products) which provide comparable functionality to Google App Engine's Image API and will have an easier migration path at a later point in time?

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to be very common in previous versions, but it looks like not everyone is having it in the latest version. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit, the company doesn't allow it), and I can't do things like creating another solution (please don't reply with this :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even take 2 GB of RAM, so I don't know if the error is really because it's lacking memory, or if it's some bug in the program. Any ideas, things to try, something?

    Read the article

  • Nested hyperlinks in XHTML 1.1 document

    - by Nazgulled
    Hi, I'm doing a simple widget for WordPress that fetches the most recent tweets from the RSS feed provided by Twitter. This widget parses any link posted in a tweet; it also parses mentions (i.e. @username) and trending topics (i.e. #nowplaying). For these 3 situations, it creates links pointing to some Twitter feature. For instance:

        "Hi @UserA, check out the song Foo from FooBar that I'm listening, it's awesome. #nowplaying"

    will be parsed into this:

        Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a>

    Now I need to add a global link to the whole message, like this:

        <a href="http://twitter.com/UserA/statuses/1234567890">
            Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a>
        </a>

    But this code does not validate, and it doesn't work anyway (the browsers don't really seem to know what to do with it). Any suggestions on how I could fix this?

    Read the article

  • How to programmatically search a PDF document in C#

    - by Nathan Reed
    I have a need to search a PDF file to see if a certain string is present. The string in question is definitely encoded as text (i.e. it is not an image or anything). I have tried just searching the file as though it were plain text, but this does not work. Is it possible to do this? Are there any libraries out there for .NET 2.0 that will extract/decode all the text out of a PDF file for me?
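
    As a sketch of one commonly used approach (the library choice is an assumption, and recent iTextSharp releases target newer frameworks than .NET 2.0): open the PDF with iTextSharp's parser API and run the search over the extracted text of each page.

        using iTextSharp.text.pdf;
        using iTextSharp.text.pdf.parser;

        class PdfSearcher
        {
            // Returns true if 'needle' occurs in the text content of any page.
            public static bool Contains(string pdfPath, string needle)
            {
                var reader = new PdfReader(pdfPath);
                try
                {
                    for (int page = 1; page <= reader.NumberOfPages; page++)
                    {
                        string text = PdfTextExtractor.GetTextFromPage(reader, page);
                        if (text.IndexOf(needle, System.StringComparison.OrdinalIgnoreCase) >= 0)
                            return true;
                    }
                }
                finally
                {
                    reader.Close();
                }
                return false;
            }
        }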

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors.

    I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key-value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, or ERROR to have consistent naming for a column in the database (for SELECTing specific types of information).

    I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank.

    So one option is to use List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the key-value pairs back to the original log entry.

    I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); then a background thread will dump the information to the database.

    My specific questions regarding this are:

      1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
      2. Is there a more .NETish way to implement the key-value pairs, other than List<List<KeyValuePair<String,String>>>?
      3. Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing?
      4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.
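
    As a sketch of one ".NETish" shape for question 2, sticking to the requirements above: each entry carries a severity plus an open-ended dictionary of named values, and the cache is swapped out under a lock for the background writer. All type and member names here are illustrative, not an existing API:

        using System.Collections.Generic;

        class LogEntry
        {
            public LogEntry(string severity) { Severity = severity; }

            public string Severity { get; private set; }                // "INFO", "WARNING", "ERROR", ...
            public readonly Dictionary<string, string> Fields =
                new Dictionary<string, string>();                       // e.g. { "USER": "john" } or { "FILENAME": "a.txt" }
        }

        class LogCache
        {
            readonly object gate = new object();
            List<LogEntry> pending = new List<LogEntry>();

            public void Add(LogEntry entry)
            {
                lock (gate) pending.Add(entry);
            }

            // Called by the background writer: swap the buffer under the lock, write to the DB outside it.
            public List<LogEntry> TakeSnapshot()
            {
                lock (gate)
                {
                    var snapshot = pending;
                    pending = new List<LogEntry>();
                    return snapshot;
                }
            }
        }

    This shape also maps naturally onto the key-value table idea from the question: one row per entry plus one row per Fields item, keyed back to the entry.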

    Read the article

  • MongoDB query to return only embedded document

    - by Matt
    Assume that I have a BlogPost model with zero-to-many embedded Comment documents. Can I query for, and have MongoDB return, only Comment objects matching my query spec? E.g.,

        db.blog_posts.find({"comment.submitter": "some_name"})

    returns only a list of comments.

    Edit: an example:

        import pymongo

        connection = pymongo.Connection()
        db = connection['dvds']
        db['dvds'].insert({'title': "The Hitchhikers Guide to the Galaxy",
                           'episodes': [{'title': "Episode 1", 'desc': "..."},
                                        {'title': "Episode 2", 'desc': "..."},
                                        {'title': "Episode 3", 'desc': "..."},
                                        {'title': "Episode 4", 'desc': "..."},
                                        {'title': "Episode 5", 'desc': "..."},
                                        {'title': "Episode 6", 'desc': "..."}]})

        episode = db['dvds'].find_one({'episodes.title': "Episode 1"},
                                      fields=['episodes'])

    In this example, episode is:

        {u'_id': ObjectId('...'),
         u'episodes': [{u'desc': u'...', u'title': u'Episode 1'},
                       {u'desc': u'...', u'title': u'Episode 2'},
                       {u'desc': u'...', u'title': u'Episode 3'},
                       {u'desc': u'...', u'title': u'Episode 4'},
                       {u'desc': u'...', u'title': u'Episode 5'},
                       {u'desc': u'...', u'title': u'Episode 6'}]}

    but I just want:

        {u'desc': u'...', u'title': u'Episode 1'}

    Read the article

  • Questions about using git as a backend storage system

    - by XO
    New to git here... I want to commit my personal file share to a git repo (text, docs, images, etc). As I make modifications to various files over time, telling git about them along the way, how do I go about things so I can:

      - Get out of the business of traditional fulls/incrementals.
      - Be able to do a point-in-time file or full clone restore.

    Basically, I want something granular, such that, if I make an edit to a file 5 times on a particular day, I will have 5 versions of that file that I can refer back to - forever. Or even just derive a full copy of everything the way it looked on that particular day. I am currently using rsync for remote incremental syncs (no file versioning).

    Read the article

  • .NET 4.0 Memory-Mapped Files versus RDBMS Storage

    - by Harry
    I'm interested in people's thoughts comparing storing data in a traditional SQL-based database versus utilising a Memory-Mapped File such as the one in the new .NET 4.0 runtime. The data in question would be arrays of simple structures.

    Obvious pros and cons:

    SQL Database pros:
      - Ad hoc query support
      - SQL management tools
      - Schema changes (adding more columns and setting default values)

    Memory-Mapped pros:
      - Lighter overhead? (this is an assumption on my part)
      - Shareable between process threads

    Any others? Is it worth it for performance gains?
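
    For reference on the memory-mapped side of the comparison, a sketch of writing and reading an element of an array of simple structs through System.IO.MemoryMappedFiles (the file name, map name, struct layout and sizes are illustrative):

        using System.IO.MemoryMappedFiles;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct Sample
        {
            public long Timestamp;
            public double Value;
        }

        class MmfDemo
        {
            static void Main()
            {
                int count = 1000;
                long size = count * Marshal.SizeOf(typeof(Sample));

                // Backed by a real file so the data survives the process; MemoryMappedFile.CreateOrOpen
                // would give a purely in-memory map shareable between processes instead.
                using (var mmf = MemoryMappedFile.CreateFromFile("samples.bin",
                                                                 System.IO.FileMode.Create,
                                                                 "samples", size))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    var s = new Sample { Timestamp = 42, Value = 3.14 };
                    accessor.Write(0, ref s);        // write the struct at offset 0

                    Sample back;
                    accessor.Read(0, out back);      // read it back
                }
            }
        }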

    Read the article
