Search Results

Search found 25790 results on 1032 pages for 'information theory'.

Page 298/1032

  • Prerequisites for Account management via an IPhone App?

    - by Icky
    Hello. I have been reading a couple of threads on this topic here. I want to create an app that communicates with a server and has the following features:

    - the user can create and manage an account on the server
    - the app communicates with the server over a secure connection
    - the user is kept up to date about important news through messages

    From what I understand so far, I need to take care of the following:

    - establish a secure connection with the server
    - send account information (user data, password) to the server and authenticate the client side
    - management and encryption of account data is handled by the server, so the app only sends data and the server stores/encrypts it (no need for me to take care of anything there)

    I think that covers the most important features. I have read that NSURLConnection can be used to send the authentication data, but how is further communication handled after that? And how is the encryption managed? Are there any useful tutorials on this? This is the first time I have delved into this topic, so any guidance is greatly appreciated. Also, if I have missed anything important (e.g. with managing accounts), please tell me.


  • PHP: We have sessions and cookies... I love cookies, but they are blowing my mind right now

    - by Matt
    I am not sure how to go about accessing the variable I need to set on a cookie. I was thinking about using the $_POST superglobal, but I don't know, based on my design, whether it will work. I am using a master-page style design, separating index.php from my function includes and database information and from the individual pages (which are returned to an include in index.php based on a $_GET value).

    So, back to my question: what is the most efficient way to set a cookie in a design where everything branches from one main page, and how would I pull the value back out? Is $_POST a good enough way to go about it? Also, by saying the cookie must be the first thing sent, does that mean I cannot run any server-side scripts before it? I could definitely make use of a login query, I think, but I don't want to write code just to be disappointed, given my lack of time and knowledge.

    I did search for answers. I know this probably feels like a generic question that could be answered elsewhere, but I know I will get an accurate and professional answer here, so I don't want to bet on the half answers I found otherwise. Of course I will sanitize everything and not store any sensitive information (passwords, address, phone, or really anything besides some kind of session ID and the username). If this is confusing I am sorry, but I am on a government computer, and they lock these down tighter than Fort Knox, so getting my code on here will be a chore until I get back to my room. Thanks, Matt


  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things) the following pieces of information:

    - Hardware IDs: BIGINTs
    - Storage Capacities: BIGINTs
    - Hardware Names: VARCHARs
    - World Wide Port Names: VARCHARs

    I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say, ID_NUMBER and CARDINAL. The same with Hardware Names, which are simple text, and WWPNs, which are hex strings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes.

    I'd like to keep all this information in one place, and I'm wondering if anybody knows of a good way to hold it all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.


  • Cannot connect to database: Login failed for user

    - by kishorejangid
    Yesterday my database connected perfectly. We have installed Windows Server 2008 R2 on our server, and I have added the user jangid on the client PC named SMTECH5. Now my database will not connect using Windows authentication; only the master database connects. Here is the error that is thrown (the same dialog appears twice, and it happens on all the PCs in our lab):

        TITLE: Connect to Database Engine
        Cannot connect to SMTECH5\COLLEGEERP.
        ADDITIONAL INFORMATION:
        Cannot open user default database. Login failed.
        Login failed for user 'SMTECH\jangid'. (Microsoft SQL Server, Error: 4064)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=4064&LinkId=20476
        BUTTONS: OK
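
    Error 4064 means the login's default database is missing, offline, or inaccessible, which fits "only the master database connects". A small C# sketch of one way to work around and repair it, reusing the server and login names from the post; connecting with an explicit Initial Catalog bypasses the broken default-database lookup, and the target database in ALTER LOGIN is a placeholder for whichever database should become the default:

        using System.Data.SqlClient;

        class FixDefaultDatabase
        {
            static void Main()
            {
                // Connect to master explicitly so the broken default database is never opened.
                var connectionString =
                    @"Data Source=SMTECH5\COLLEGEERP;Initial Catalog=master;Integrated Security=SSPI";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = conn.CreateCommand())
                {
                    conn.Open();
                    // Point the login at an existing, accessible database again.
                    // Requires an account with permission to alter logins.
                    cmd.CommandText = @"ALTER LOGIN [SMTECH\jangid] WITH DEFAULT_DATABASE = [master]";
                    cmd.ExecuteNonQuery();
                }
            }
        }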


  • HTML onmouseover doesn't show the text

    - by REALFREE
    I'm trying to show some information on a picture when the onmouseover/onmouseout events occur, something like what this website does with its top sellers. My code looks like this:

        <div class="container"
             onmouseover="$('#info').css('display','block');"
             onmouseout="$('#info').css('display','none');">
          <img src="...">
          <div id="info" style="display:none">
            ... some text ...
          </div>
        </div>

    So the info div is initially hidden, but when the mouse is over a picture I want the information to appear on the corresponding picture (with a tinted background over the image so the text reads well). Somehow it doesn't work. I'd appreciate any suggestion on how to approach this problem.

    Edit: more precisely, the reason I chose inline handlers is that I need to show/hide the text corresponding to the image (each containing a unique div id) that the user moves the mouse over/out of. That means I have many container divs, and each container has a unique div id.


  • Windows Forms Event Log

    - by blu
    I am writing to the event log from my Windows Forms application running on Windows 7 and am getting this message in the event log:

        The description for Event ID X from source Application cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: <Exception Details> the message resource is present but the message is not found in the string/message table

    My logging code is:

        public void Log(Exception exc)
        {
            EventLog.WriteEntry(
                "Application",
                exc.ToString(),
                EventLogEntryType.Error,
                100);
        }

    My logging in Windows Forms usually goes to a database, but in this case I decided to use the event log. I usually use the event log in ASP.NET applications, but those are on XP Pro locally and Windows Server 2003 on the web boxes. Is this a Windows 7 thing or a Windows Forms thing, and what should I do to fix it? Thanks.
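
    That particular text usually appears when the entry is written under the generic "Application" source, whose registered message file has no template for the event ID. A minimal sketch of the usual fix, registering a dedicated source for the application; the source name below is made up, and creating a source needs administrative rights, so it is normally done once at install time:

        using System.Diagnostics;

        public static class AppLog
        {
            const string Source = "MyWinFormsApp";   // placeholder source name

            public static void Log(System.Exception exc)
            {
                // Creating the source requires admin rights; ideally do this once in the installer.
                if (!EventLog.SourceExists(Source))
                    EventLog.CreateEventSource(Source, "Application");

                EventLog.WriteEntry(Source, exc.ToString(), EventLogEntryType.Error, 100);
            }
        }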


  • Accessing the selected element inside a templated TextBlock bound to a WPF ListBox

    - by black sensei
    Hello good people. I'm trying to achieve some functionality but I don't know how to start. I'm using VS 2008 SP1 and I'm consuming a web service which returns a collection (contactInfo[]) that I bind to a ListBox with a little data template on it:

        <ListBox Margin="-146,-124,-143,-118.808" Name="contactListBox" MaxHeight="240" MaxWidth="300" MinHeight="240" MinWidth="300">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock>
                        <CheckBox Name="contactsCheck" Uid="{Binding fullName}" Checked="contacts_Checked" />
                        <Label Content="{Binding fullName}" FontSize="15" FontWeight="Bold"/>
                        <LineBreak/>
                        <Label Content="{Binding mobile}" FontSize="10" FontStyle="Italic" Foreground="DimGray" />
                        <Label Content="{Binding email}" FontStyle="Italic" FontSize="10" Foreground="DimGray"/>
                    </TextBlock>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    Everything works fine so far. When a checkbox is checked, I'd like to access the information in the labels belonging to the same row (or attached to it) and append that information to a global variable, for each checkbox that is checked. My problem right now is that I don't know how to do that. Can anyone shed some light on how to do it? As you can see from Checked="contacts_Checked", that handler is where I planned to perform the operations. Thanks for reading and helping out.
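
    One way to do this inside contacts_Checked, sketched below: every element generated by the DataTemplate gets the bound contactInfo item as its DataContext, so the checkbox can hand back the whole object without touching the labels at all. The window class name and the StringBuilder accumulator are illustrative stand-ins for the "global variable":

        using System.Text;
        using System.Windows;
        using System.Windows.Controls;

        public partial class ContactsWindow : Window   // window name is illustrative
        {
            readonly StringBuilder checkedContacts = new StringBuilder();

            private void contacts_Checked(object sender, RoutedEventArgs e)
            {
                var checkBox = (CheckBox)sender;

                // The DataContext of anything inside the DataTemplate is the bound item.
                var contact = checkBox.DataContext as contactInfo;
                if (contact == null)
                    return;

                checkedContacts.AppendFormat("{0} ({1}, {2}); ",
                    contact.fullName, contact.mobile, contact.email);
            }
        }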


  • Dynamically created iframe used to download file triggers onload with firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking for information on why. For the fix, see my comment below.

    I have a web application which repeatedly downloads wav files dynamically (after a timeout or as instructed by the user) into an iframe in order to trigger the default audio player to play them. The application targets only FF 2 or 3. In order to determine when the file is downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer I am creating a new iframe each time.

    As long as Firebug is enabled in the browser using the application, everything works great. Without Firebug, the onload never fires. The version of Firebug is 1.3.1, and I've tested Firefox 2.0.0.19 and 3.0.7. Any ideas how I can get the onload from the iframe to reliably trigger when the wav file has downloaded? Or is there another way to signal the completion of the download?

    Here's the pertinent code. HTML (hidden's only attribute is display:none;):

        <div id="audioContainer" class="hidden">
        </div>

    JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read):

        waitingForFile = true; // (declared at the beginning of the closure)
        $("#loading").removeClass("hidden");
        var content = "<iframe id='audioPlayer' name='audioPlayer' src='" +
            /path/to/file.wav + "' onload='notifyLoaded()'></iframe>";
        document.getElementById("audioContainer").innerHTML = content;

    And the content of notifyLoaded:

        function notifyLoaded() {
            waitingForFile = false; // (declared at beginning of the closure)
            $("#loading").addClass("hidden");
        }

    I have also tried creating the iframe via document.createElement, but I found the same behavior: the onload triggered each time with Firebug enabled and never without it.

    EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, there are no console.log calls here.


  • Color Printer: Laser vs Inkjet

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I've used inkjets for decades. I need a printer that can deliver quality as high as these photo inkjet printers, but I'm tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was thinking about buying a color laser printer, but I'm not sure these printers can deliver the same quality and are worth the investment in terms of toner consumption.

    I remember that my old LaserJet was able to print 1100 pages per toner cartridge. The inkjet printers I have can print 500 pages per cartridge. Price for price, two inkjet cartridges cost more or less the same as one toner cartridge and in theory print almost the same. I am not sure whether this holds for color lasers. What can you tell me about quality, toner cost, and cost per page for laser vs. inkjet printers? Is it worth the change? (Keep in mind that an inkjet printer costs $50 and a laser printer costs $200.) Thanks.
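
    Purely to put the post's own numbers side by side, a rough cost-per-page sketch in C#. The cartridge price is an assumed placeholder (the post only says one toner cartridge costs about as much as two inkjet cartridges), so treat the break-even figure as illustrative only:

        using System;

        class PrinterCostSketch
        {
            static void Main()
            {
                double inkjetCartridgePrice = 25.0;                       // assumed, not from the post
                double tonerCartridgePrice  = 2 * inkjetCartridgePrice;   // post: one toner ~ two inkjet cartridges

                double inkjetCostPerPage = inkjetCartridgePrice / 500.0;  // 500 pages per inkjet cartridge
                double laserCostPerPage  = tonerCartridgePrice  / 1100.0; // 1100 pages per toner cartridge

                // Pages needed before the laser's higher purchase price ($200 vs $50) pays for itself.
                double breakEvenPages = (200.0 - 50.0) / (inkjetCostPerPage - laserCostPerPage);

                Console.WriteLine($"Inkjet: {inkjetCostPerPage:F3} $/page, laser: {laserCostPerPage:F3} $/page");
                Console.WriteLine($"Break-even after roughly {breakEvenPages:F0} pages");
            }
        }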


  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    Hello. I'm attempting to create a RAM disk that loads its previous contents when the system starts up and writes its contents back to a disk image every six hours. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh"), everything works fine. But when it is run from launchd during startup, it doesn't work. Here are the lines from the log; the first line just gives an idea of where we are in the boot process:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here is the script and plist file in question. Note that 'set -vx' is at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but to be honest that seems unlikely.


  • Cannot connect Linux XAMPP PHP to MS SQL database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.1. I have tried the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php, and they all fail. I am able to "connect" with a linked database through MS Access; however, that is not an acceptable solution, as not all users will have Access.

    The first method (at webcheatsheet) uses mssql_connect() et al., but I get this error from the mssql_connect() call:

        Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code]

    [my server] is the server address; I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, either on my machine or on the MS SQL server? We do not have a bona fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks


  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. The Linux file command does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always:

        ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00

    That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type?

    UPDATE: I don't know the exact reverse-engineering details, but most of the files in our case are zips once the first 29 (or so) bytes are ignored. So in practice the problem is solved (we know how to process the files), but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
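
    For what it's worth, a small C# sketch of how that guess could be batch-checked: confirm the 16-byte prefix, then scan the head of each file for the zip local-file-header signature (PK\x03\x04) to see where the embedded zip starts. This assumes the files really are wrapped zips, as the update suggests:

        using System;
        using System.IO;

        class MagicProbe
        {
            // The 16-byte prefix observed on ~100,000 of the files.
            static readonly byte[] Prefix =
            {
                0xFF, 0xEE, 0xEE, 0xDD, 0x00, 0x00, 0x00, 0x00,
                0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
            };

            static void Main(string[] args)
            {
                var head = new byte[64];
                int read;
                using (var fs = File.OpenRead(args[0]))
                    read = fs.Read(head, 0, head.Length);

                bool hasPrefix = read >= Prefix.Length;
                for (int i = 0; hasPrefix && i < Prefix.Length; i++)
                    hasPrefix = head[i] == Prefix[i];

                // Look for the zip local-file-header signature "PK\x03\x04" after the wrapper.
                int zipOffset = -1;
                for (int i = 0; i < read - 3; i++)
                {
                    if (head[i] == 0x50 && head[i + 1] == 0x4B && head[i + 2] == 0x03 && head[i + 3] == 0x04)
                    {
                        zipOffset = i;
                        break;
                    }
                }

                Console.WriteLine($"prefix match: {hasPrefix}, zip signature at offset: {zipOffset}");
            }
        }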


  • AStar in a specific case in C#

    - by KiTe
    Hello. For an internship, I have to use the A* algorithm in the following case: the unit shape is a square with height and width of 1; we can travel within zones represented by rectangles, but we can't travel outside these predefined areas; and we can go from one rectangle to another through a door, represented by a segment on the corresponding square edge.

    Here are the two things I already did, neither of which satisfied my boss:

    1. I created the following classes: a Door class, which contains the locations of the two separated squares and the door's orientation (top, left, bottom, right); a Map class, which contains a door list, a rectangle list representing the walkable areas, and a 2D array representing the ground's squares (for additional information, through an enumeration); and classes for the A* algorithm (Node, AStar).

    2. A MapCase class, which holds information about the case's effect and its doors through an enumeration (with the [Flags] attribute set, to be able to accumulate several pieces of information on each case); a Map class, which only contains a 2D array of MapCase objects; and the classes for the A* algorithm (still Node and AStar).

    The second version is better than the first (less useless calculation, better map class architecture), but my boss is still not satisfied with my mapping class architecture. The A* and Node classes are good and easily maintainable, so I don't think I have to explain them in more depth for now.

    So here is my question: does anybody have a good idea for implementing A* given the problem's specification (walkable rectangles, a square unit area, travelling through doors)? He said that a grid view of the problem (a 2D array) shouldn't be the correct way to solve it. I hope I've been clear while describing my problem. Thanks, KiTe
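
    One direction that fits the "no grid" hint is to run A* over a graph of doors rather than over squares: each door midpoint is a node, two nodes are connected when their doors open onto the same rectangle, and the start and goal positions are added as temporary nodes linked to the doors of the rectangles that contain them. A purely illustrative sketch follows; the Node type, the Manhattan costs, and the adjacency list (built elsewhere from the rectangle/door data) are assumptions, not the original classes:

        using System;
        using System.Collections.Generic;

        // A door midpoint (or a start/goal position). Records give value equality,
        // which is what the dictionaries below rely on.
        public record Node(double X, double Y);

        public class DoorGraphAStar
        {
            // Adjacency list: node -> nodes reachable by crossing a single rectangle.
            readonly Dictionary<Node, List<Node>> edges;

            public DoorGraphAStar(Dictionary<Node, List<Node>> edges) => this.edges = edges;

            static double Heuristic(Node a, Node b) =>
                Math.Abs(a.X - b.X) + Math.Abs(a.Y - b.Y);   // admissible for axis-aligned movement

            public List<Node> FindPath(Node start, Node goal)
            {
                var open = new SortedSet<(double f, int tie, Node node)>();
                var gScore = new Dictionary<Node, double> { [start] = 0 };
                var cameFrom = new Dictionary<Node, Node>();
                int tieBreaker = 0;

                open.Add((Heuristic(start, goal), tieBreaker++, start));

                while (open.Count > 0)
                {
                    var (_, _, current) = open.Min;
                    open.Remove(open.Min);

                    if (current == goal)
                    {
                        var path = new List<Node> { current };
                        while (cameFrom.TryGetValue(current, out var previous))
                        {
                            current = previous;
                            path.Add(current);
                        }
                        path.Reverse();
                        return path;
                    }

                    if (!edges.TryGetValue(current, out var neighbours))
                        continue;

                    foreach (var next in neighbours)
                    {
                        // Edge cost: distance between the two door midpoints.
                        double tentative = gScore[current] + Heuristic(current, next);
                        if (!gScore.TryGetValue(next, out var known) || tentative < known)
                        {
                            gScore[next] = tentative;
                            cameFrom[next] = current;
                            open.Add((tentative + Heuristic(next, goal), tieBreaker++, next));
                        }
                    }
                }
                return null;   // the rectangles are not connected
            }
        }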


  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:

    1. Push a changeset to a site in IIS.
    2. Not interrupt the users.
    3. Be able to roll back effortlessly.

    There are a few things that I know have to happen, and which are already handled: out-of-proc session and out-of-proc cache. So the questions that remain:

    1. How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online.
    2. How do I roll back effortlessly?

    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to the private site and get warmed up. After warm-up, the sites are swapped. A rollback then only requires swapping back to the private site without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
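
    On the mechanics of the swap, one possible sketch using the Microsoft.Web.Administration API (the site names and deployment layout below are placeholders): after the private site has been warmed up, repoint the two sites' root virtual directories at each other's folders. A rollback is the same swap run again:

        // Requires a reference to Microsoft.Web.Administration.dll (ships with IIS)
        // and must run with enough rights to modify the IIS configuration.
        using Microsoft.Web.Administration;

        class SiteSwapper
        {
            static void Main()
            {
                using (var iis = new ServerManager())
                {
                    var publicApp  = iis.Sites["PublicSite"].Applications["/"].VirtualDirectories["/"];
                    var privateApp = iis.Sites["PrivateSite"].Applications["/"].VirtualDirectories["/"];

                    // Point the public site at the freshly warmed-up deployment folder
                    // and the private site at the previous one.
                    var oldPath = publicApp.PhysicalPath;
                    publicApp.PhysicalPath  = privateApp.PhysicalPath;
                    privateApp.PhysicalPath = oldPath;

                    iis.CommitChanges();
                }
            }
        }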


  • "User Friendly" .net compatible Regex/Text matching tools?

    - by Binary Worrier
    Currently in our software we provide a hook where we call a DLL built by our clients to parse information out of documents we are processing (the DLL takes in some text, or a file, and returns a list of name/value pairs). For example, we're given a Word doc or text file to archive. We do various things to the file and call a DLL that returns "pertinent" information about the file; among other things, we store that "pertinent" data for posterity. What is considered "pertinent" depends on the client and the type of the document; we don't care, we get it and store it.

    I've been asked to develop a user-friendly "something" that will allow a non-programmer to "configure" how to get this data from a plain text document (<humor>The user story ends with the helpful suggestion/query "We could use regex for this?"</humor>). It's safe to assume that a list of regexes isn't going to cut it. I've written some of these parsers for customers; the regexes to do them would be hideous, and some of them can't be done with regexes at all. Also, one of the requirements above is "user friendly", which rules out anything that has users seeing or editing regex expressions.

    As you can guess, I don't have a fortune of time to do this, and am wondering: is there anything out there that I can plug into our app that has a nice front end and does exactly what I need? :) No? Whaddaya mean, no! ... sigh. OK then, failing that, is there anything out there that "visually" builds regexes and/or other pattern-matching expressions, and then allows one to run those expressions against some text? The MS BRE will do what I want, but I need something prettier that looks less like code. Thanks guys


  • Google Chrome Extension - Help needed

    - by Jim-Y
    I'm new to writing Google Chrome extensions, and I have some basic questions. I want to make a Chrome extension with the following scheme:

    - a popup window containing buttons and result fields (popup.html)
    - when a button is clicked, I want to trigger an event; this event should connect to a web server (I will write the servlet too) and gather information from the server (XMLHttpRequest())
    - after that, I want my extension to load the gathered information into one of the result fields.

    Simple, isn't it? But I have several problems, right at the beginning :( I started by reading tutorials, but I am still foggy on the main structure of an extension. Now I have started an app containing a popup.html, manifest.json, and so on. In popup.html there is a result field and a button:

        <div id="extension_container">
          <div id="header">
            <p id="intro">Result here</p>
            <button type="button" id="button">Click Me!</button>
          </div> <!-- END header -->
          <div id="content">
          </div> <!-- END content -->

    When the button is clicked, I trigger an event handled with jQuery, code here:

        <script>
        $(document).ready(function(){
          $("#button").click(function(){
            $("#intro").text("Hello, im added");
            alert("Clicked");
          });
        });
        </script>

    And here comes the problem: in popup.html this doesn't work; if I load it into Chrome, nothing happens. On the other hand, if I open popup.html in the browser, not as an extension, everything works fine. So I think I have some basic misunderstanding of extension structure, starting with background pages, background JavaScript and so on :( Could anyone help me?


  • How do I create Twitter-style URLs for my app - Using existing application or app redesign - Ruby on Rails

    - by bgadoci
    I have developed a blog application of sorts that I am trying to let other users take advantage of (for free, and mostly for family). I am wondering if the authentication I have set up will allow for such a thing. Here is the scenario: currently the application allows users to sign up for an account, and when they do so they can create blog posts and organize those posts via tags. The application displays no data publicly (in other words, you have to log in to see anything). To gain access you have to create an account, and even after you do, you cannot see anyone else's information, as the application filters using the current_user method and displays in the /posts/index.html.erb page. This would be great if a user only wanted to blog for themselves, but it is not really what I am looking for.

    My question has two parts (hopefully I won't make anyone mad by not putting these into two questions):

    1. Is it possible for a particular user's data to live at www.myapplication.com/user without moving everything to the /user/show.html.erb file?
    2. Is it possible to make some of that information (living at the URL) public but still require login for create and destroy actions? Essentially, exactly like Twitter.

    I am just curious whether I can get from where I am (using the current_user methods across controllers to display in /posts/index.html.erb) to where I want to be. My fear is that I have to redesign the app so that the user data lives in the /user/show.html.erb page. Thoughts?


  • How do I create Twitter-style URLs for my app - Using existing application or app redesign - Ruby on Rails

    - by bgadoci
    I have developed a blog application of sorts that I am trying to let other users take advantage of (for free, and mostly for family). I am wondering if the authentication I have set up will allow for such a thing. Here is the scenario: currently the application allows users to sign up for an account, and when they do so they can create blog posts and organize those posts via tags. The application displays no data publicly (in other words, you have to log in to see anything). To gain access you have to create an account, and even after you do, you cannot see anyone else's information, as the application filters using the current_user method and displays in the /posts/index.html.erb page. This would be great if a user only wanted to blog for themselves, but it is not really what I am looking for.

    My question has two parts (hopefully I won't make anyone mad by not putting these into two questions):

    1. Is it possible for a particular user's data to live at www.myapplication.com/user without moving everything to the /user/show.html.erb file?
    2. Is it possible to make some of that information (living at the URL) public but still require login for create and destroy actions? Essentially, exactly like Twitter.

    I am just curious whether I can get from where I am (using the current_user methods across controllers to display in /posts/index.html.erb) to where I want to be. My fear is that I have to redesign the app so that the user data lives in the /user/show.html.erb page. Thoughts?

    UPDATE: I am using Clearance for authentication, by Thoughtbot. I wonder if there is something I can set in the vendored gem path to present the /posts/index.html.erb code as the /user/id code, replacing id with the user name.


  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx stack and I plan to offer free non-commercial hosting (in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger: for each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
          ...
          passenger_base_uri /app1;
          passenger_base_uri /app2;
          passenger_base_uri /app3;
        }

    Now this is not inherently bad, as in theory I could allow a user to run just one app on his web space, but even in that case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behaviour I am looking for is the ability to automatically map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache config file each time. This seems to be a limitation of Passenger. As such, I am considering an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it runs immediately? What is your experience of the performance difference between Passenger + nginx and lighttpd + FastCGI? Thanks in advance.

    Scenario details: with nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without the path being declared in nginx.conf and the server being restarted. The wished-for behaviour with the new setup: a user can add a new folder with a new app and it will run on lighttpd + FastCGI without any extra configuration of the web server.


  • Informational messages returned with WCF involved

    - by DT
    This question is about "informational messages" and having them flow from a back end to a front end in a consistent manner. The quick question is: how do you do it?

    Background: a web application uses WCF to call back-end services. In the back-end service a "message" may occur. The reason for this message may vary, but for this discussion let's assume that a piece of data was looked at and it was determined that the caller should be given back some information regarding it. This informational message may occur during a save and also during retrieval of information. Again, the message itself is not what is important here, but the fact that there are informational messages to give back under a number of different scenarios.

    As a team, we all want to return these messages in a standard way, all of the time. In the past this "standard way" has been done differently by different people. Here are some possibilities:

    1. Every operation has a "ref" parameter at the end that contains these messages.
    2. Every method returns these messages; however, this only really works for "save" methods, as one would think that "retrieve" methods should return actual data and not messages.
    3. Some approach using the call context, so as not to "pollute" all method signatures with an extra parameter; however, with WCF in the picture this complicates things. That is, do the messages go back on a header?

    Question: back to my question, then. How are others returning messages such as those described above back through the tiers of an application, over WCF, and back to the caller?
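
    One common pattern, sketched here rather than prescribed: wrap every operation's payload in a generic response envelope that carries the informational messages next to the data, so saves and retrievals report messages the same way and nothing rides on ref parameters or headers. The type and member names below are illustrative:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class ServiceResponse<T>
        {
            [DataMember]
            public T Result { get; set; }

            [DataMember]
            public List<string> Messages { get; set; } = new List<string>();
        }

        // Contract sketch (operation and entity names are illustrative):
        // [OperationContract] ServiceResponse<Customer> GetCustomer(int id);
        // [OperationContract] ServiceResponse<bool>     SaveCustomer(Customer customer);

    DataContractSerializer handles closed generic data contracts, so each closed ServiceResponse<T> simply appears as its own type in the generated client proxy.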


  • Use web.sitemap to control page access

    - by Jakob Gade
    I was setting up permissions for pages in an ASP.NET website with <location> tags in web.config, something similar to this:

        <location path="Users.aspx">
          <system.web>
            <authorization>
              <allow roles="Administrator"/>
              <deny users="*"/>
            </authorization>
          </system.web>
        </location>

    However, I also have a web.sitemap which basically contains the same information, i.e. which user roles can see/access which pages. A snippet from my web.sitemap:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
          <siteMapNode title="Home">
            ... lots of nodes here ...
            <siteMapNode url="users.aspx" roles="Administrator" title="users" description="Edit users" />
            ...
          </siteMapNode>
        </siteMap>

    Is there some kind of nifty way of using web.sitemap only to configure access? The <location> tags are quite verbose, and I don't like having to duplicate this information.
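
    There is no built-in switch that turns web.sitemap into an authorization store, but one workaround, sketched below, is a common base page that reads the roles attribute of the current node and enforces it itself. The class name and the AccessDenied.aspx page are placeholders, and depending on the provider configuration you may also need securityTrimmingEnabled="true" in web.config:

        using System;
        using System.Web;
        using System.Web.UI;

        public class SecuredPage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                SiteMapNode node = SiteMap.Provider.FindSiteMapNode(Request.Path);
                if (node == null || node.Roles.Count == 0)
                    return;   // page not in the sitemap, or no roles restriction declared

                foreach (string role in node.Roles)
                {
                    if (role == "*" || User.IsInRole(role))
                        return;   // explicitly allowed
                }

                Response.Redirect("~/AccessDenied.aspx");   // placeholder "denied" page
            }
        }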


  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations, specifically when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show the user. I have no idea how much data is passing over the network to the remote DB, or its progress. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, and then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate per DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck just using an indeterminate progress bar or a waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
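
    One pragmatic sketch, assuming batching the update is acceptable: push the changed rows in slices and raise a progress callback after each slice. It won't show bytes on the wire, but it turns 0%-then-100% into per-batch steps. The code below uses the generic DbDataAdapter; typed TableAdapters expose an equivalent Update(DataRow[]) overload, so the same idea applies. Names are illustrative:

        using System;
        using System.Data;
        using System.Data.Common;
        using System.Linq;

        public static class BatchedUpdate
        {
            // Pushes only the changed rows, in slices, so the UI can be told how far along we are.
            public static void Update(DataTable table, DbDataAdapter adapter,
                                      int batchSize, Action<int, int> reportProgress)
            {
                DataRow[] changed = table.Select(
                    "", "", DataViewRowState.Added | DataViewRowState.ModifiedCurrent);

                for (int offset = 0; offset < changed.Length; offset += batchSize)
                {
                    DataRow[] batch = changed.Skip(offset).Take(batchSize).ToArray();
                    adapter.Update(batch);   // sends just this slice to the database

                    int done = Math.Min(offset + batchSize, changed.Length);
                    reportProgress(done, changed.Length);   // e.g. update a ProgressBar here
                }
            }
        }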


  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine.

    How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just go in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!


  • What is the fastest, most efficient way to get up to speed on a new technology?

    - by SLC
    My current job involves working with a huge number of technologies, most of which are very niche and unheard of. In some cases I have to write something about the technology, or with the technology, such as lessons, examples, or tutorials, on behalf of the developer of that technology or someone backing it.

    When I am told to learn about a new technology, my first port of call is to check our internal library, and then to look on Amazon for a book on the subject. Failing that, or if the project is too small to warrant a purchase, I hit up Google and YouTube. However, the results of randomly googling what I want to learn are hit and miss. Some days I can find everything I want to know in a series of lessons or videos, and it's no problem. Other times I can find almost nothing, and I really have to piece things together from various sites. The result is that there are various resources out there (videos, interactive lessons, tutorials, books, etc.), but when I need to learn something fast, I often don't know the best way to go about it. It's not about fun, because I don't always have the luxury of working my way through a 600-page textbook named "A Complete Guide To Technology X"; I have to deliver results quickly.

    One example I'd like to use is ASP.NET MVC 2, which is something I have been told to learn. I grabbed a book on MVC 1 to refresh my knowledge, but googling it doesn't produce much useful information. I've seen a ton of ScottGu's tutorials on it, but they are mostly feature presentations, and some date back almost a year. The same applies to Channel 9, and there are no books out yet on Amazon.

    My question therefore has two parts. The first asks, "Where are the best places to look to get the information needed to learn a new technology?" and the second asks, "What is the most efficient way to use such resources to learn a new technology?"


  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations, specifically when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show the user. I have no idea how much data is passing over the network to the remote DB, or its progress. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, and then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate per DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck just using an indeterminate progress bar or a waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
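
    Since this version of the question focuses on estimating the transfer size up front, here is a rough, purely illustrative sketch of the per-row estimate the post describes (sizeof-style guesses for value types, lengths for strings and byte arrays). It ignores protocol overhead, so treat the result as a ballpark figure for scaling a progress bar, not an exact byte count:

        using System;
        using System.Data;

        public static class SizeEstimator
        {
            // Very rough size estimate for Added/Modified rows in a DataTable.
            public static long EstimateBytes(DataTable table)
            {
                long total = 0;
                foreach (DataRow row in table.Rows)
                {
                    if (row.RowState != DataRowState.Added && row.RowState != DataRowState.Modified)
                        continue;

                    foreach (DataColumn column in table.Columns)
                    {
                        object value = row[column];
                        if (value == null || value is DBNull)
                            continue;

                        if (value is string s)        total += s.Length * 2;   // UTF-16 characters
                        else if (value is byte[] b)   total += b.Length;
                        else if (value is bool || value is byte)                        total += 1;
                        else if (value is short)                                        total += 2;
                        else if (value is int || value is float)                        total += 4;
                        else if (value is long || value is double || value is DateTime) total += 8;
                        else if (value is decimal || value is Guid)                     total += 16;
                        else total += 16;   // fallback guess for anything else
                    }
                }
                return total;
            }
        }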

