Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.

  • JS: Storing dynamic variables across pages?

    - by user2467599
    I've been looking into local storage options and plugins like Persist.js, sessvars.js, and even sisyphus.js - but I am unsure if any are the best fit (though I'm fairly certain I need to use one). Page one is a form with input fields for data like names, phones, and email. I have a button that replicates a wrapper div (and its inputs) as long as more inputs are needed. When the form is filled, the user hits submit, which takes them to a 'confirmation'-type PHP page. I need to give the user an 'edit' button on page 2 that takes them back to page 1 and leaves all the info alone. For the most part everything returns fine, but if the user had hit the 'replicate' button before submission, and then hits edit afterwards, all the inputs that were dynamically generated return empty and the div no longer exists. Someone suggested that my variables are not persistent (when the replicate button is hit, an input with id="name1" becomes "name2" and so on), so that's when I found out about the plugins mentioned before. Is there a way I can implement one of those plugins (or any other method) so that when the user returns to page one the div and its input values remain unchanged? And if I'm on the right track, are there any examples?
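
    A minimal sketch of doing this with plain sessionStorage instead of a plugin; the wrapper id, element ids, and function names here are illustrative assumptions, not from the question:

        // Page 1, in the submit handler: capture every input inside the wrapper.
        function saveFormState() {
            var inputs = document.querySelectorAll('#wrapper input');
            var state = [];
            for (var i = 0; i < inputs.length; i++) {
                state.push({ id: inputs[i].id, value: inputs[i].value });
            }
            sessionStorage.setItem('formState', JSON.stringify(state));
        }

        // Page 1, on load (i.e. after 'edit' leads back): rebuild any
        // replicated inputs that no longer exist, then refill all values.
        function restoreFormState() {
            var wrapper = document.getElementById('wrapper');
            var state = JSON.parse(sessionStorage.getItem('formState') || '[]');
            for (var i = 0; i < state.length; i++) {
                var input = document.getElementById(state[i].id);
                if (!input) {
                    input = document.createElement('input');
                    input.id = state[i].id;
                    wrapper.appendChild(input);
                }
                input.value = state[i].value;
            }
        }

    Call saveFormState() before submitting and restoreFormState() on DOMContentLoaded; sessionStorage survives navigation within the tab, which matches the page 1 to page 2 and back flow.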

  • Chrome Extension - Console Log not firing

    - by coffeemonitor
    I'm starting to learn to make my own Chrome Extensions, and starting small. At the moment, I'm switching from using the alert() function to console.log() for a cleaner development environment. For some reason, console.log() is not displaying in my Chrome console. However, the alert() function is working just fine. Can someone review my code below and perhaps tell me why console.log() isn't firing as expected? manifest.json { "manifest_version": 2, "name": "Sandbox", "version": "0.2", "description": "My Chrome Extension Playground", "icons": { "16": "imgs/16x16.png", "24": "imgs/24x24.png", "32": "imgs/32x32.png", "48": "imgs/48x48.png" }, "background": { "scripts": ["js/background.js"] }, "browser_action": { "default_title": "My Fun Sandbox Environment", "default_icon": "imgs/16x16.png" }, "permissions": [ "background", "storage", "tabs", "http://*/*", "https://*/*" ] } js/background.js function click(e) { alert("this alert certainly shows"); console.log("But this does not"); } // Fire a function, when icon is clicked chrome.browserAction.onClicked.addListener(click); As you can see, I kept it very simple: just the manifest.json and a background.js file with an event listener for when the icon in the toolbar is clicked. As I mentioned, the alert() is popping up nicely, while the console.log() appears to be ignored.

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from MySQL database). However, this randomization needs to persist on a per-session basis, so that other people that visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page) however since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like: random_sorts( sort_id, created_at, user_id NULL if guest) random_sort_items( sort_id, item_id, position ) And then simply store the 'sort_id' in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects being removed, although deletion is very rare). And, as load increases the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem but I can't find any rails plugins that do this... Any ideas? Thanks!!
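
    A sketch of the intersection-table idea in MySQL, with the names from the question; the items table, the column types, and the application-supplied :sort_id are assumptions. The derived table is shuffled first and then numbered, so each position really does pair with a random item:

        CREATE TABLE random_sorts (
            sort_id    INT AUTO_INCREMENT PRIMARY KEY,
            created_at DATETIME NOT NULL,
            user_id    INT NULL                    -- NULL if guest
        );

        CREATE TABLE random_sort_items (
            sort_id  INT NOT NULL,
            item_id  INT NOT NULL,
            position INT NOT NULL,
            PRIMARY KEY (sort_id, position)
        );

        -- Build one shuffle per session:
        INSERT INTO random_sort_items (sort_id, item_id, position)
        SELECT :sort_id, shuffled.id, (@pos := @pos + 1)
        FROM (SELECT id FROM items ORDER BY RAND()) AS shuffled,
             (SELECT @pos := 0) AS seed;

        -- Paginate as usual:
        SELECT item_id FROM random_sort_items
        WHERE sort_id = :sort_id
        ORDER BY position
        LIMIT 20 OFFSET 40;

    The reaper is then a DELETE on random_sorts (and its matching random_sort_items rows) by created_at, and newly created objects can be appended at MAX(position) + 1 instead of invalidating the whole sort.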

  • How to make chrome.tabs.update work with a content script

    - by user1673772
    I'm working on a little Google Chrome extension. I want to create a new tab, go to the URL "sample"+i+".com", launch a content script on that URL, update the tab to "sample"+(i+1)+".com", and launch the same script again. I've looked at the Q&A available on Stack Overflow and googled around, but I didn't find a solution that works. This is my current background.js code (it works): it creates two tabs (i=21 and i=22) and loads my content script for each URL. When I tried to use chrome.tabs.update instead, Chrome went straight to a tab with i = 22 (and the script ran only once): function extraction(tab) { for (var i =21; i<23;i++) { chrome.storage.sync.set({'extraction' : 1}, function() {}); //for my content script chrome.tabs.create({url: "http://example.com/"+i+".html"}, function() {}); } } chrome.browserAction.onClicked.addListener(function(tab) {extraction(tab);}); The content script and manifest.json are not the problem. I need to do this 15,000 times, so I can't do it any other way. Thank you.
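
    The loop itself is part of the problem: all the callbacks close over the same i, and the navigations race each other. A sketch of one way around it, reusing a single tab and only moving on once the current page reports complete (callback-style APIs as in the question; this assumes the manifest's content script matches these URLs and finishes its work by the time the page completes, otherwise have the content script message the background page instead):

        function visitPages(tabId, i, last) {
            // Wait for the current page to finish loading, then go to page i.
            chrome.tabs.onUpdated.addListener(function listener(id, info) {
                if (id === tabId && info.status === "complete") {
                    chrome.tabs.onUpdated.removeListener(listener);
                    if (i > last) return;               // all pages visited
                    chrome.tabs.update(tabId, { url: "http://example.com/" + i + ".html" });
                    visitPages(tabId, i + 1, last);     // queue the next page
                }
            });
        }

        chrome.browserAction.onClicked.addListener(function () {
            chrome.tabs.create({ url: "http://example.com/21.html" }, function (tab) {
                visitPages(tab.id, 22, 15000);
            });
        });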

  • template specialization for static member functions; howto?

    - by Rolle
    I am trying to implement a template function which handles void differently, using template specialization. The following code gives me an "Explicit specialization in non-namespace scope" error in gcc: template <typename T> static T safeGuiCall(boost::function<T ()> _f) { if (_f.empty()) throw GuiException("Function pointer empty"); { ThreadGuard g; T ret = _f(); return ret; } } // template specialization for functions with no return value template <> static void safeGuiCall<void>(boost::function<void ()> _f) { if (_f.empty()) throw GuiException("Function pointer empty"); { ThreadGuard g; _f(); } } I have tried moving it out of the class (the class is not templated) and into the namespace, but then I get the error "Explicit specialization cannot have a storage class". I have read many discussions about this, but people don't seem to agree on how to specialize function templates. Any ideas?
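
    For comparison, a sketch of the form the compiler accepts: the explicit specialization lives at namespace scope and simply drops the storage-class specifier (the primary template's declaration is what determines linkage), with GuiException and ThreadGuard as in the question:

        template <typename T>
        T safeGuiCall(boost::function<T ()> _f) {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            ThreadGuard g;
            return _f();
        }

        // Explicit specialization: namespace scope, and no 'static' here.
        template <>
        void safeGuiCall<void>(boost::function<void ()> _f) {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            ThreadGuard g;
            _f();
        }

    Overloading on boost::function<void ()> instead of specializing would sidestep the issue entirely, since the parameter types differ.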

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure: $ node --eval 'require("http");' undefined:1 ^ ReferenceError: require is not defined at eval at <anonymous> (node.js:762:36) at eval (native) at node.js:762:36 $ Here's a working example of using require() typed into the REPL: $ node > require("http"); { STATUS_CODES: { '100': 'Continue' , '101': 'Switching Protocols' , '102': 'Processing' , '200': 'OK' , '201': 'Created' , '202': 'Accepted' , '203': 'Non-Authoritative Information' , '204': 'No Content' , '205': 'Reset Content' , '206': 'Partial Content' , '207': 'Multi-Status' , '300': 'Multiple Choices' , '301': 'Moved Permanently' , '302': 'Moved Temporarily' , '303': 'See Other' , '304': 'Not Modified' , '305': 'Use Proxy' , '307': 'Temporary Redirect' , '400': 'Bad Request' , '401': 'Unauthorized' , '402': 'Payment Required' , '403': 'Forbidden' , '404': 'Not Found' , '405': 'Method Not Allowed' , '406': 'Not Acceptable' , '407': 'Proxy Authentication Required' , '408': 'Request Time-out' , '409': 'Conflict' , '410': 'Gone' , '411': 'Length Required' , '412': 'Precondition Failed' , '413': 'Request Entity Too Large' , '414': 'Request-URI Too Large' , '415': 'Unsupported Media Type' , '416': 'Requested Range Not Satisfiable' , '417': 'Expectation Failed' , '418': 'I\'m a teapot' , '422': 'Unprocessable Entity' , '423': 'Locked' , '424': 'Failed Dependency' , '425': 'Unordered Collection' , '426': 'Upgrade Required' , '500': 'Internal Server Error' , '501': 'Not Implemented' , '502': 'Bad Gateway' , '503': 'Service Unavailable' , '504': 'Gateway Time-out' , '505': 'HTTP Version not supported' , '506': 'Variant Also Negotiates' , '507': 'Insufficient Storage' , '509': 'Bandwidth Limit Exceeded' , '510': 'Not Extended' } , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] } , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] } , ServerResponse: { [Function: ServerResponse] super_: [Circular] } , ClientRequest: { [Function: ClientRequest] super_: [Circular] } , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } } , createServer: [Function] , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } } , createClient: [Function] , cat: [Function] } > Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.
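
    One measurable detriment worth knowing about (a sketch, easy to verify with a test table; it assumes SQL Server's usual row-size estimation applies to the 2005 engine targeted here): the optimizer sizes sort and hash memory grants from the declared width, treating variable-length columns as roughly half full, so a blanket VARCHAR(255) inflates grants even when the data never exceeds 30 bytes:

        CREATE TABLE t_narrow (surname VARCHAR(30));
        CREATE TABLE t_wide   (surname VARCHAR(255));

        -- Same data in both, but this sort is costed at ~15 bytes per value
        -- against t_narrow and ~127 bytes against t_wide, so t_wide reserves
        -- a far larger memory grant for the identical result:
        SELECT surname FROM t_wide ORDER BY surname;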

  • What is the linkage of the following functions?

    - by Derui Si
    When I was reading the C++03 standard (7.1.1 Storage class specifiers [dcl.stc]), I found the examples below, but I'm not able to tell how the linkage of each successive declaration is determined. Could anyone help here? Thanks in advance! static char* f(); // f() has internal linkage char* f() { /* ... */ } // f() still has internal linkage char* g(); // g() has external linkage static char* g() { /* ... */ } // error: inconsistent linkage void h(); inline void h(); // external linkage inline void l(); void l(); // external linkage inline void m(); extern void m(); // external linkage static void n(); inline void n(); // internal linkage static int a; // a has internal linkage int a; // error: two definitions static int b; // b has internal linkage extern int b; // b still has internal linkage int c; // c has external linkage static int c; // error: inconsistent linkage extern int d; // d has external linkage static int d; // error: inconsistent linkage UPD: Additionally, how should I understand this statement in the standard: "The linkages implied by successive declarations for a given entity shall agree. That is, within a given scope, each declaration declaring the same object name or the same overloading of a function name shall imply the same linkage. Each function in a given set of overloaded functions can have a different linkage, however."

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so: Events ID (PK int autoInc), Time (datetime), Caption (varchar) Position ID (PK int autoinc), Time (datetime), Easting (float), Northing (float) Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.: SELECT E.*,P.* FROM Events E JOIN Position P ON E.Time = P.Time OR, even just simply comparing a datetime value (taking into consideration that the parameterized value may contain the fractional seconds part - which MySQL has always accepted), e.g. SELECT E.* FROM Events E WHERE E.Time = @Time I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT milliseconds, so I would assume this query would function OK. However, as of version 5.6.4, I have read MySQL can now store milliseconds with the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above query to work. However, with version 5.6.4 and later, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions, it would be greatly appreciated: In general, how does MySQL compare datetime fields against one another (consider the above query)? Is the above query fine, and does it make use of indexes on the time fields? (MySQL < 5.6.4) Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4) Will the join query above work? (MySQL 5.6.4) EDIT I know I can cast the datetimes, thanks to those who answered, but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed), and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries, applying indexes, etc., not to mention having to rewrite all my queries. EDIT2 Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?
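
    On the 5.6.4+ side, a sketch of keeping everything at second accuracy in the storage layer so no casts or functions are needed in queries: declare the columns with an explicit fractional-seconds precision of 0 (which is also the default), so incoming values that carry milliseconds are rounded to whole seconds on insert. Table shapes are from the question; the index names are made up:

        CREATE TABLE Events (
            ID      INT AUTO_INCREMENT PRIMARY KEY,
            Time    DATETIME(0) NOT NULL,    -- 0 fractional digits
            Caption VARCHAR(255),
            KEY idx_events_time (Time)
        );

        CREATE TABLE Position (
            ID       INT AUTO_INCREMENT PRIMARY KEY,
            Time     DATETIME(0) NOT NULL,
            Easting  FLOAT,
            Northing FLOAT,
            KEY idx_position_time (Time)
        );

        -- Plain equality on indexed columns, so both indexes stay usable:
        SELECT E.*, P.* FROM Events E JOIN Position P ON E.Time = P.Time;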

  • NServiceBus & MSMQ: How To Change the Default Permissions on the Queue?

    - by Amy T
    My team is on our first attempt at using NServiceBus (v2.0), using MSMQ as the backing storage. We're getting stuck on queue permissions. We're using it in a Web Forms application, where the user account the website runs under is not an administrator on the machine. When NServiceBus creates the MSMQ queue, it gives the local administrators group full control, and the local everyone and anonymous groups permissions to send messages. But then later, as part of initializing the queue, NServiceBus tries to read all of its messages. That's where we run into the permissions error. Since the website isn't running as an administrator, it's not allowed to read messages. How are other people dealing with this? Do your applications run as administrators? Or do you create the MSMQ queue in your code first, giving it the permissions you need, so that NServiceBus doesn't have to create it? Or is there a bit of configuration we're missing? Or are we likely writing our code that uses NServiceBus incorrectly to be running into this?

  • Implementing IEnumerable on Non-List Items

    - by Stacey
    I have a class that contains a static number of objects. This class needs to be frequently 'compared' to other classes that will be simple List objects. public partial class Sheet { public Item X{ get; set; } public Item Y{ get; set; } public Item Z{ get; set; } } the items are obviously not going to be "X" "Y" "Z", those are just generic names for example. The problem is that due to the nature of what needs to be done, a List won't work; even though everything in here is going to be of type Item. It is like a checklist of very specific things that has to be tested against in both code and runtime. This works all fine and well; it isn't my issue. My issue is iterating it. For instance I want to do the following... List<Item> UncheckedItems = // Repository Logic Here. UncheckedItems contains all available items; and the CheckedItems is the Sheet class instance. CheckedItems will contain items that were moved from Unchecked to Checked; however due to the nature of the storage system, items moved to Checked CANNOT be REMOVED from Unchecked. I simply want to iterate through "Checked" and remove anything from the list in Unchecked that is already in "Checked". So naturally, that would go like this with a normal list. foreach(Item item in Unchecked) { if( Checked.Contains(item) ) Unchecked.Remove( item ); } But since "Sheet" is not a 'List', I cannot do that. So I wanted to implement IEnumerable so that I could. Any suggestions? I've never implemented IEnumerable directly before and I'm pretty confused as to where to begin.
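
    A minimal sketch of making Sheet enumerable with an iterator block; with this in place the containment checks work, and the removal can be done without mutating a list while iterating over it:

        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        public partial class Sheet : IEnumerable<Item>
        {
            public IEnumerator<Item> GetEnumerator()
            {
                // One yield per fixed slot; extend if the real class has more.
                yield return X;
                yield return Y;
                yield return Z;
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Then, with CheckedItems as the Sheet instance: UncheckedItems.RemoveAll(item => CheckedItems.Contains(item)); List<T>.RemoveAll also avoids the foreach-and-Remove pattern above, which throws once the collection is modified mid-iteration.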

  • What kind of online hosting do I need for a WCF-based service?

    - by mafutrct
    First of all, I'm not sure if SO is the right place to ask. Please migrate me if needed. I would like to host a WCF-based service so it is available for everyone. While hosting it on my personal, local servers succeeded, I would prefer to move it to an external service provider for various reasons. I'll be blunt: I have no clue about hosting providers. I know there are web hosts, virtual and root servers and several other services. What I would like to know is what kind of hosting I need in my case. I understand that a root server would easily fulfill my requirements, but that is not exactly cheap. The program I'd like to run on the server requires .NET 4, preferably on a Windows machine. Access to a folder in the file system is much appreciated (1 GB storage is enough by far). Communication with clients (in the form of applications written in .NET) happens via opening a port on the server. Traffic is low (<<1 GB/month?). There is no website. Having the provider perform updates would be nice. My understanding is that a virtual server would be a possible solution. Prices seem to start at around 5€/month, which is OK for me. However, I read that for these cheap solutions RAM is severely limited (~400 MB), and I'm not confident that is enough to run Windows and a .NET application.

  • Which design pattern fits - does strategy make sense?

    - by user554833
    --Bump: one desperate try to get someone's attention. I have a simple database table that stores a list of users who have subscribed to folders, either by email OR to show up on the site (only in the web UI). In the storage table this is controlled by a number (1 - show on site, 2 - by email). When rendering the UI I need to show a checkbox next to each folder the user has subscribed to (both email and on site). There is a separate table which stores a set of default subscriptions that apply to any user who has not expressed a subscription. This is basically a folder ID and a virtual group name. But email subscriptions do not count for applying these default groups. So if there is no "on site" subscription, apply the default group. That's the rule. How about a strategy pattern here? (Pseudo code) Interface ISubscription public ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionWithDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionWithoutDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionOnlyDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Does this even make sense? I would be more than glad to receive any criticism / help / notes. I am learning. Cheers
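
    A sketch of how the strategy could be chosen, in C# to match the pseudo code; the User type and its two flags are invented here, and the mapping is one plausible reading of the rule above:

        public static ISubscription SelectStrategy(User user)
        {
            if (user.HasOnSiteSubscription)
                return new SubscriptionWithoutDefaultGroup();  // expressed on-site choices win

            if (user.HasEmailSubscription)
                return new SubscriptionWithDefaultGroup();     // email kept, defaults added for on-site

            return new SubscriptionOnlyDefaultGroup();         // nothing expressed at all
        }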

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographical variables, 1...n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing websites, consumer communications etc. Well. If you have: a large number of consumers, and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be..), ...you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a website customizing a page, a recommendation engine making a recommendation..) has died of boredom before getting any observable results. There is the possibility of having the profiles in memory, which would of course increase the performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

  • How do I display Core Data on a second view controller?

    - by jon
    I am working on my first Core Data iPhone application. I am using a navigation controller, and the root view controller displays 4 rows. Clicking the first row takes me to a second table view controller. However, when I click the back button, repeat the row tap, click the back button again, and tap the row a third time, I get an error. I have been researching this for a week with no success. I can reproduce the error easily: Create a new Navigation-based Application, use Core Data for storage, call it MyTest, which creates MyTestAppDelegate and RootViewController. Add a new UIViewController subclass, with UITableViewController and xib, and call it ListViewController. Copy code from RootViewController.h and .m to ListViewController.h and .m, changing the file names appropriately. To simplify the code, I removed the trailing "_" from all variables. In RootViewController, I added #import "ListViewController.h", set up an array to display 4 rows, and navigate to ListViewController when clicking the first row. In ListViewController.m, I added #import "MyTestAppDelegate.h" and the following code: - (void)viewDidLoad { [super viewDidLoad]; if (managedObjectContext == nil) { managedObjectContext = [(MyTestAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext]; } .. } The sequence that causes the error is tap row, return, tap row, return, tap row - error. managedObjectContext is synthesized for the third time. I appreciate your patience and your help, as this makes no sense to me. ADDENDUM: I may have a partial solution. http://www.iphonedevsdk.com/forum/iphone-sdk-development/41688-accessing-app-delegates-managed-object-context.html If I do not release the managedObjectContext in the .m file, the error goes away. Is that OK, or will that cause me issues? - (void)dealloc { [fetchedResultsController release]; // [managedObjectContext release]; [super dealloc]; } ADDENDUM 2: See solution below. Sorry for the formatting issues - this was my first post.
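
    On the first addendum: under manual reference counting that is the correct fix, not just a workaround. The context was obtained from the app delegate by assigning the ivar directly, so this controller never became an owner (no alloc/retain/copy) and must not release it. A sketch of the dealloc:

        - (void)dealloc {
            [fetchedResultsController release];
            // managedObjectContext belongs to the app delegate; since it was
            // never retained here, releasing it would over-release it.
            [super dealloc];
        }

    Had the property been declared retain and set via self.managedObjectContext = ..., the release would have been right.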

  • Java - multithreaded access to a local value store which is periodically cleared

    - by Telax
    I'm hoping for some advice or suggestions on how best to handle multi-threaded access to a value store. My local value storage is designed to hold onto objects which are currently in use. If an object is not in use then it is removed from the store. A value is pumped into my store via thread1, its entry into the store is announced to listeners, and the value is stored. Values coming in on thread1 will either be totally new values or updates for existing values. A timer is used to periodically remove any value from the store which is not currently in use, and so all that remains of such a value is its ID held locally by an intermediary. Now, an active element on thread2 may wake up and try to access a set of values by passing a set of value IDs which it knows about. Some values will be stored already (great) and some may not (sadface). Those values which are not already stored will be retrieved from an external source. My main issue is that items which have not already been stored and are currently being queried for may arrive on thread1 before the query is complete. I'd like to try and avoid locking access to the store whilst a query is being made, as it may take some time.
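
    A sketch of one way to get this without a store-wide lock (Java; the Value interface and the method names are invented): a ConcurrentHashMap gives non-blocking reads against thread1's updates, and computeIfAbsent confines the expensive external fetch so that only callers asking for the same missing ID wait on it:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        class LocalValueStore {
            private final ConcurrentMap<String, Value> store = new ConcurrentHashMap<>();

            // thread1: new values and updates land here.
            void publish(Value v) {
                store.put(v.id(), v);       // last write wins for updates
            }

            // thread2: store hit or external fetch, decided per key.
            Value lookup(String id) {
                return store.computeIfAbsent(id, this::fetchFromExternalSource);
            }

            // timer: reap anything no longer in use.
            void reap() {
                store.values().removeIf(v -> !v.inUse());
            }

            private Value fetchFromExternalSource(String id) {
                // placeholder for the real external query
                throw new UnsupportedOperationException();
            }
        }

        interface Value {
            String id();
            boolean inUse();
        }

    A concurrent publish for an ID whose fetch is still inside computeIfAbsent simply blocks until the fetch finishes and then overwrites it with the fresher value, which is usually the ordering you want.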

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQL Server Express on Windows, with a particular field of interest, 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes. The only thing it will use the list for is checking for the existence of a given code. Having the Linux server call out to the Windows server is impractical, as the Windows server is behind a NAT'ed office internet connection and may not always be accessible. I have set it up so the Windows server will push the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box. Should I store them in a MySQL table with a single field 'code'? Then I get fast indexed lookups, O(1); however, I think synchronization will be an issue: given an updated list of codes, pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE, followed by INSERT? Should I instead store them in a flat file? Then I have O(n) lookup time rather than O(1), plus extra constant overhead, as I will be processing the file in Ruby. However, synchronization is easy - simply replace the file.
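
    On the MySQL option, a sketch of a swap that never exposes a half-synchronized list (table and column names are assumptions): load the freshly POSTed codes into a staging table, then swap it in atomically; a multi-table RENAME TABLE is a single atomic operation in MySQL:

        CREATE TABLE codes_staging LIKE codes;

        -- bulk-load the pushed list:
        INSERT INTO codes_staging (code) VALUES ('ABC123'), ('DEF456') /* , ... */;

        -- atomic swap; readers see either the old or the new complete list:
        RENAME TABLE codes TO codes_old, codes_staging TO codes;
        DROP TABLE codes_old;

    With a UNIQUE (or primary key) index on code, the existence check is a single point lookup: SELECT 1 FROM codes WHERE code = ? LIMIT 1.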

  • How do I deal with different requests that map to the same response?

    - by daxim
    I'm designing a Web service. The request is idempotent, so I chose the GET method. The response is relatively expensive to calculate and not small, so I want to get caching (on the protocol level) right. (Don't worry about memoisation on my part, I have that already covered; my question here is actually also about paying attention to the Web as a whole.) There's only one mandatory parameter and a number of optional parameters with default values if missing. For example, the following two map to the same representation of the response. (If this is a dumb way to go about the interface, propose something better.) GET /service?mandatory_parameter=some_data HTTP/1.1 GET /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3 HTTP/1.1 However, I imagine clients do not know this and would treat them as separate, and therefore waste cache storage. What should I do to avoid violating the golden rule of caching? Make up a canonical form, document it (e.g. all parameters are required after all and need to be sorted in a specific order) and return a client error unless the required form is met? Instead of an error, redirect permanently to the canonical form of a request? Or is it enough not to mind what the request looks like, and just respond with the same ETag for the same response?
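
    A sketch of the permanent-redirect option on the wire, using the requests from the question (the Host header is an assumption); any client or shared cache that follows it ends up storing a single entry under the canonical URL:

        GET /service?mandatory_parameter=some_data HTTP/1.1
        Host: example.org

        HTTP/1.1 301 Moved Permanently
        Location: /service?mandatory_parameter=some_data;optional_parameter=default1;another_optional_parameter=default2;yet_another_optional_parameter=default3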

  • Variable Scoping in a method and its persistence in C++

    - by de costo
    Consider the following public method that adds an integer variable to a vector of ints (a private member) in a C++ class. KoolMethod() { int x; x = 10; KoolList.Add(x); } Vector<int>KoolList; But is this a valid addition to a vector? Upon calling the method, it creates a local variable. The scope of this local variable ends the moment execution control leaves the method. And since this local variable is allocated on the stack (on the method call), any member of KoolList would point to an invalid memory location in the deallocated stack, which may or may not contain the expected value of x. Is this an accurate description of the above mechanism? Is there a need to create an int in heap storage using the "new" operator every time a value needs to be added to the vector, as described below? KoolMethod() { int *x = new int(); *x = 10; KoolList.Add(x); } Vector<int*>KoolList;
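
    For reference, that description is not what std::vector does: it stores its own copy of each element. A sketch using the real API (the member function is push_back, not Add):

        #include <vector>

        std::vector<int> koolList;

        void koolMethod() {
            int x = 10;
            koolList.push_back(x);   // copies the value 10 into the vector
        }                            // x dies here; the copy in koolList lives on

    So no heap allocation is needed, and the second variant would also leak each int unless it were deleted later.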

  • Getting my string value from my form into my class (not another form)

    - by jovany
    Hello all, I have a question regarding some data which is being transferred from one form to my class. It's not going quite the way I'd like, so I figured maybe there is someone who could help me. This is the code in my class: Public Class DrawableTextBox Inherits Drawable Dim i_testString As Integer Private s_InsertLabel As String Private drawFont As Font Public Sub New(ByVal fore_color As Color, ByVal fill_color As Color, Optional ByVal line_width As Integer = 0, Optional ByVal new_x1 As Integer = 0, Optional ByVal new_y1 As Integer = 0, Optional ByVal new_x2 As Integer = 1, Optional ByVal new_y2 As Integer = 1) MyBase.New(fore_color, fill_color, line_width) X1 = new_x1 Y1 = new_y1 X2 = new_x2 Y2 = new_y2 Trace.WriteLine(s_InsertLabel) End Sub Friend WriteOnly Property _textBox() As String Set(ByVal Value As String) s_InsertLabel = Value Trace.WriteLine(s_InsertLabel) End Set End Property ' Draw the object on this Graphics surface. Public Overrides Sub Draw(ByVal gr As System.Drawing.Graphics) ' Make a Rectangle representing this rectangle. Dim rect As Rectangle = GetBounds() ' Fill the rectangle as usual. Dim fill_brush As New SolidBrush(FillColor) gr.FillRectangle(fill_brush, rect) fill_brush.Dispose() ' See if we're selected. If IsSelected Then ' Draw the rectangle highlighted. Dim highlight_pen As New Pen(Color.Yellow, LineWidth) gr.DrawRectangle(highlight_pen, rect) highlight_pen.Dispose() ' Draw grab handles. Trace.WriteLine("drawing the lines for my textbox") DrawGrabHandle(gr, X1, Y1) DrawGrabHandle(gr, X1, Y2) DrawGrabHandle(gr, X2, Y2) DrawGrabHandle(gr, X2, Y1) Else 'TextBox() Dim fg_pen As New Pen(Color.Red, LineWidth) 'Dim fontSize As Single = 0.1 + ((Y2 - Y1) / 2) Dim fontSize As Single = 20 Try Dim drawFont As New Font("Arial", fontSize, FontStyle.Bold) Trace.WriteLine(s_InsertLabel) gr.DrawString(s_InsertLabel, drawFont, Brushes.Brown, X1, Y1) Catch ex As ArgumentException End Try gr.DrawRectangle(Pens.Azure, rect) ' gr.DrawRectangle(fg_pen, rect) fg_pen.Dispose() End If End Sub Public Function GetValueString(ByVal ValueType As String) Return ValueType End Function ' Return the object's bounding rectangle. Public Overrides Function GetBounds() As System.Drawing.Rectangle Return New Rectangle( _ Min(X1, X2), _ Min(Y1, Y2), _ Abs(100), _ Abs(30)) Trace.WriteLine("don't forget to make variables in GetBounds DrawableTextbox") End Function ' Return True if this point is on the object. Public Overrides Function IsAt(ByVal x As Integer, ByVal y As Integer) As Boolean Return (x >= Min(X1, X2)) AndAlso _ (x <= Max(X1, X2)) AndAlso _ (y >= Min(Y1, Y2)) AndAlso _ (y <= Max(Y1, Y2)) End Function ' Move the second point. Public Overrides Sub NewPoint(ByVal x As Integer, ByVal y As Integer) X2 = x Y2 = y End Sub ' Return True if the object is empty (e.g. a zero-length line). Public Overrides Function IsEmpty() As Boolean Return (X1 = X2) AndAlso (Y1 = Y2) End Function End Class I've got a form with a textbox (form1) in which the text is inserted and passed on a button click (all via properties). As you can see, I've placed several traces; in the property of the class my trace works fine, however if I look in my Draw function it is already gone, and I get a blank trace. Does anyone know what's happening here? Thanks in advance. (Forgive me, I'm new.)

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup: I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking) which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.) 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent. Issue: Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature. Temperatures: When I've checked temperatures the room temperature has been ~23°C. I haven't changed the placement of the computer nor the hardware or cooling setup between BIOS versions. BIOS version 0501 BIOS monitor: CPU: ~50°C Chassis: ~33°C I haven't got any temperature measures from lm-sensors or the like for version 0501 because I only discovered the issue after upgrading to version 0606 and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501). BIOS version 0606 BIOS monitor: CPU: ~30°C Chassis: ~33°C lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15): coretemp-isa-0000 Adapter: ISA adapter Core 0: +32.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0001 Adapter: ISA adapter Core 1: +35.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0002 Adapter: ISA adapter Core 2: +29.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0003 Adapter: ISA adapter Core 3: +36.0°C (high = +80.0°C, crit = +98.0°C) The BIOS monitor temperatures were checked directly after the lm-sensors temperatures. BIOS versions 0706, 0801, 1101 and 3203: I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606. Information from Asus: The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3., as indicated by sidran32, might affect temperatures): P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002 Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release Improve DRAM compatibility Improve System stability Improve compatibility with some Raid card model Increase IGD share memory size to 512MB However, the following FAQ might give a hint: FAQs: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal? Solution: That is normal, as the BIOS does not send the idle command to the CPU, making most of the power saving features useless. You should be getting similar readings if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic). We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10 – 15 minutes of creating the image – not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple, and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted, and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach; this is an 010 level course.) From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console): 1. For your backups, simply snapshot the system volume (/dev/sda1) as needed. 2. If you lose your running instance, do the following: a. Create a new volume from your last snapshot backup. b. Launch another instance of your starting AMI (must be EBS-backed). c. Stop this instance. d. Detach the existing system volume from the new stopped instance and discard it. e. Attach the newly created volume as the system volume (/dev/sda1) to the stopped instance. f. Restart the new instance. I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?

  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to set up a vSphere Client vCenter v5.0.0 Build 455964 Template Customization Specification using a Windows Sysprep unattended answer XML file for Win2008R2. However, I didn't know how Sysprep worked before attempting this, so it was a time-consuming nightmare (even after reviewing VMware vSphere ESXi 5's documentation)! I think I've figured out what I'm supposed to be doing, but it's still not working. The biggest problem at this point is that vSphere Client vCenter Customization Specification IP address information is not sticking when I load a Sysprep XML file with just 1 basic setting! This can only be a bug. Here is the process I'm using: PROCESS for Windows - vSphere Client Install Windows OS install VM Tools customize Windows (GPOs can be used to do this after deployment) install Applications (GPOs can be used to do this after deployment too) shutdown the VM convert the VM to a template create a custom Windows Sysprep XML answer file with desired customizations View Management Customization Specifications Manager create "New" Specification for "Target Virtual Machine OS" select Windows check "Use Custom Sysprep Answer File" (ADDS: Custom Sysprep File. KEEPS: Network (IP), Operating System Options (SID, Sysprep /generalize). REPLACES: Registration Information of Owner Name & Organization, Computer Name, Windows License (Key), Administrator Password, Time Zone, Run Once, Workgroup or Domain) name it as "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS=Customization Specification) set Description as "Created YYYY/MM/DD by FLast" NEXT import a Sysprep answer file from secure location NEXT Custom settings NEXT click "..." box to right of "Use DHCP" set "Use the following IP settings:" for "IP Address" fill out the first 2 octets set appropriate values for other 2-3 fields set DNS server addresses OK NEXT check "Generate New Security ID (SID)" ALWAYS as template is likely a domain-member computer so it can be updated occasionally NEXT Finish View Inventory VMs and Templates right-click previously completed template Deploy Virtual Machine from this Template provide the new OS name (max 15 chars) select inventory location NEXT select Host/Cluster (wait for validation to succeed) NEXT select Resource Pool (wait for validation to succeed) NEXT select Storage location NEXT check "Power on this virtual machine after creation" select "Customize using an existing customization specification" select desired specification select "Use the Customization Wizard to temporarily adjust the specification before deployment" NEXT NEXT Custom settings? NEXT check "Generate New Security ID (SID)" ALWAYS as template is likely a domain-member computer so it can be updated occasionally NEXT Finish Finish. I know a community member named "brian" (http://serverfault.com/users/25904/brian) has worked with this scenario before, but I couldn't figure out how to contact him directly, so Brian, if you see this message, could you provide some information to help? Thanks, Brian
