Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • Are there any suggestions for this new assembly language?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective of rewriting the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much more time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion.

    Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than the one in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated!

    Stack Manipulation (IMP: [Space])
    Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are six stack instructions.

        hold N     Push the number onto the stack
        copy       Duplicate the top item on the stack
        copy N     Copy the nth item on the stack (given by the argument) onto the top of the stack
        swap       Swap the top two items on the stack
        drop       Discard the top item on the stack
        drop N     Slide n items off the stack, keeping the top item

    Arithmetic (IMP: [Tab][Space])
    Arithmetic commands operate on the top two items on the stack, and replace them with the result of the operation. The first item pushed is considered to be left of the operator.

        add        Addition
        sub        Subtraction
        mul        Multiplication
        div        Integer Division
        mod        Modulo

    Heap Access (IMP: [Tab][Tab])
    Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored at that location on the top of the stack.

        save       Store
        load       Retrieve

    Flow Control (IMP: [LF])
    Flow control operations are also common. Subroutines are marked by labels, as are the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly.

        L:         Mark a location in the program
        call L     Call a subroutine
        goto L     Jump unconditionally to a label
        if=0 L     Jump to a label if the top of the stack is zero
        if<0 L     Jump to a label if the top of the stack is negative
        return     End a subroutine and transfer control back to the caller
        exit       End the program

    I/O (IMP: [Tab][LF])
    Finally, we need to be able to interact with the user. There are I/O instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take from the top of the stack the heap address in which to store the result.

        print chr  Output the character at the top of the stack
        print int  Output the number at the top of the stack
        input chr  Read a character and place it in the location given by the top of the stack
        input int  Read a number and place it in the location given by the top of the stack

    Question: How would you redesign, rewrite, or rename the previous mnemonics, and for what reasons?
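    For reference, here is what a small countdown program might look like with the proposed mnemonics. This is only a sketch: the label placement, whether if=0 pops its operand, and the ;-style comment syntax are all assumptions, since the outline above does not specify them.

            hold 5          ; push the initial counter
        loop:
            copy            ; duplicate the counter so the test does not consume it
            if=0 done       ; assumed to pop the copy and jump when it is zero
            copy
            print int       ; print the current counter value
            hold 1
            sub             ; counter - 1 (first item pushed is left of the operator)
            goto loop
        done:
            exit            ; assembles to [LF][LF][LF]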


  • Delphi 7 - How can I copy a file that is being written to?

    - by Simon
    I have an application that logs information to a daily text file every second on a master PC. A slave PC on the network, running the same application, would like to copy this text file to its local drive. I can see there are going to be file access issues. These files should be no larger than 30-40MB each, and the network is 100MB Ethernet. Since the copying process could take longer than 1 second, the logging PC will need to open the file for writing while it is being read.

    What is the best method for the file writing (logging) and file copying procedures? I know there is the standard Windows CopyFile() procedure, however this has given me file access problems. There is also TFileStream using the fmShareDenyNone flag, but this also very occasionally gives me an access problem too (about 1 per week). What is the best way of accomplishing this task?

    My current file logging:

        procedure FSWriteline(Filename, Header, s: String);
        var
          LogFile: TFileStream;
          line: String;
        begin
          if not FileExists(Filename) then
          begin
            LogFile := TFileStream.Create(Filename, fmCreate or fmShareDenyNone);
            try
              LogFile.Seek(0, soFromEnd);
              line := Header + #13#10;
              LogFile.Write(line[1], Length(line));
              line := s + #13#10;
              LogFile.Write(line[1], Length(line));
            finally
              LogFile.Free;
            end;
          end
          else
          begin
            line := s + #13#10;
            LogFile := TFileStream.Create(Filename, fmOpenWrite or fmShareDenyNone);
            try
              LogFile.Seek(0, soFromEnd);
              LogFile.Write(line[1], Length(line));
            finally
              LogFile.Free;
            end;
          end;
        end;

    My file copy procedure:

        procedure DoCopy(infile, Outfile: String);
        begin
          ForceDirectories(ExtractFilePath(Outfile)); // ensure folder exists
          if FileAge(infile) = FileAge(Outfile) then
            Exit; // they have the same modified time
          try
            { Open existing destination }
            fo := TFileStream.Create(Outfile, fmOpenReadWrite or fmShareDenyNone);
            fo.Position := 0;
          except
            { otherwise create destination }
            fo := TFileStream.Create(Outfile, fmCreate or fmShareDenyNone);
          end;
          try
            { open source }
            fi := TFileStream.Create(infile, fmOpenRead or fmShareDenyNone);
            try
              cnt := 0;
              fi.Position := cnt;
              max := fi.Size;
              { start copying }
              repeat
                dod := BLOCKSIZE; // block size
                if cnt + dod > max then
                  dod := max - cnt;
                if dod > 0 then
                  did := fo.CopyFrom(fi, dod);
                cnt := cnt + did;
                Percent := Round(cnt / max * 100);
              until (dod = 0);
            finally
              fi.Free;
            end;
          finally
            fo.Free;
          end;
        end;
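    One arrangement worth testing (a sketch, not a verified fix) is to make the share modes asymmetric: the logger denies other writers but admits readers, while the copier asks only for read access and denies nothing, so neither open call conflicts with the other:

        // Logger side: other processes may read, but not write.
        LogFile := TFileStream.Create(Filename, fmOpenWrite or fmShareDenyWrite);

        // Copier side: read-only, no deny flags, so the open succeeds
        // even while the logger holds the file for writing.
        fi := TFileStream.Create(InFile, fmOpenRead or fmShareDenyNone);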


  • Perl not closing TCP sockets if clients are no longer connected?

    - by LM
    The purpose of the application is to listen for a specific UDP multicast and then forward the data to any TCP clients connected to the server. The code works fine, but I have a problem with the sockets not closing after the TCP clients disconnect. A socket sniffer utility shows that the sockets remain open and all the UDP data continues to be forwarded to the clients. The problem, I believe, is with the "if ($write->connected())" block, as it always returns true, even if the TCP client is no longer connected. I use standard Windows Telnet to connect to the server and to see the data. When I close Telnet, the TCP socket is supposed to close on the server. Any reason why connected() shows the connections as active even if they are not? Also, what alternative should I use then? Code:

        #!/usr/bin/perl
        use IO::Socket::Multicast;
        use IO::Socket;
        use IO::Select;

        my $tcp_port = "4550";
        my $tcp_socket = IO::Socket::INET->new(
            Listen    => SOMAXCONN,
            LocalAddr => '0.0.0.0',
            LocalPort => $tcp_port,
            Proto     => 'tcp',
            ReuseAddr => 1,
        );

        use Socket qw(IPPROTO_TCP TCP_NODELAY);
        setsockopt($tcp_socket, IPPROTO_TCP, TCP_NODELAY, 1);

        use constant GROUP => '239.2.0.81';
        use constant PORT  => '6550';
        my $udp_socket = IO::Socket::Multicast->new(Proto => 'udp', LocalPort => PORT);
        $udp_socket->mcast_add(GROUP) || die "Couldn't set group: $!\n";

        my $read_select  = IO::Select->new();
        my $write_select = IO::Select->new();
        $read_select->add($tcp_socket);
        $read_select->add($udp_socket);

        ## Loop forever, reading data from the UDP socket and writing it to the
        ## TCP socket(s).
        while (1) {
            ## No timeout specified (see docs for IO::Select). This will block until a TCP
            ## client connects or we have data.
            my @read = $read_select->can_read();
            foreach my $read (@read) {
                if ($read == $tcp_socket) {
                    ## Handle connect from TCP client. Note that UDP connections are
                    ## stateless (no accept necessary)...
                    my $new_tcp = $read->accept();
                    $write_select->add($new_tcp);
                }
                elsif ($read == $udp_socket) {
                    ## Handle data received from UDP socket...
                    my $recv_buffer;
                    $udp_socket->recv($recv_buffer, 1024, undef);
                    ## Write the data read from UDP out to the TCP client(s). Again, no
                    ## timeout. This will block until a TCP socket is writable.
                    my @write = $write_select->can_write();
                    foreach my $write (@write) {
                        ## Make sure the socket is still connected before writing.
                        if ($write->connected()) {
                            $write->send($recv_buffer);
                        }
                        else {
                            $write_select->remove($write);
                            close $write;
                        }
                    }
                }
            }
        }
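    For background: connected() only reports whether the socket was ever connected to a peer; it does not probe the wire, so it keeps returning true after the client goes away. A more reliable pattern (sketched below, not part of the original code) is to watch the client sockets for readability as well: a sysread that returns 0 means the peer performed an orderly close, at which point the socket can be dropped from both select sets.

        # after accepting, watch the client for readability too
        $read_select->add($new_tcp);

        # in the can_read loop, for a client socket:
        my $n = sysread($read, my $buf, 1024);
        if (!defined $n or $n == 0) {
            # error or orderly close -- forget this client
            $read_select->remove($read);
            $write_select->remove($read);
            close $read;
        }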


  • C++ template function specialization using TCHAR on Visual Studio 2005

    - by Eli
    I'm writing a logging class that uses a templatized operator<< function. I'm specializing the template function on wide-character strings so that I can do some wide-to-narrow translation before writing the log message. I can't get TCHAR to work properly - it doesn't use the specialization. Ideas? Here's the pertinent code:

        // Log.h header
        class Log
        {
        public:
            template <typename T> Log& operator<<( const T& x );
            template <typename T> Log& operator<<( const T* x );
            template <typename T> Log& operator<<( const T*& x );
            ...
        };

        template <typename T>
        Log& Log::operator<<( const T& input )
        {
            printf("ref");
        }

        template <typename T>
        Log& Log::operator<<( const T* input )
        {
            printf("ptr");
        }

        template <> Log& Log::operator<<( const std::wstring& input );
        template <> Log& Log::operator<<( const wchar_t* input );

    And the source file:

        // Log.cpp
        template <>
        Log& Log::operator<<( const std::wstring& input )
        {
            printf("wstring ref");
        }

        template <>
        Log& Log::operator<<( const wchar_t* input )
        {
            printf("wchar_t ptr");
        }

        template <>
        Log& Log::operator<<( const TCHAR*& input )
        {
            printf("tchar ptr ref");
        }

    Now, I use the following test program to exercise these functions:

        // main.cpp - test program
        int main()
        {
            Log log;
            log << "test 1";
            log << L"test 2";
            std::string test3( "test3" );
            log << test3;
            std::wstring test4( L"test4" );
            log << test4;
            TCHAR* test5 = L"test5";
            log << test5;
        }

    Running the above tests reveals the following:

        // Test results
        ptr
        wchar_t ptr
        ref
        wstring ref
        ref

    Unfortunately, that's not quite right. I'd really like the last one to be "TCHAR", so that I can convert it. According to Visual Studio's debugger, when I step into the function being called in test 5, the type is wchar_t*& - but it's not calling the appropriate specialization. Ideas? I'm not sure if it's pertinent or not, but this is on a Windows CE 5.0 device.
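    For what it is worth (background, not from the original post): TCHAR is not a distinct type, so the compiler cannot tell a TCHAR specialization apart from the wchar_t one on a Unicode build, and Windows CE is Unicode-only.

        // From the Windows headers, simplified:
        #ifdef _UNICODE
        typedef wchar_t TCHAR;   // the branch taken on Windows CE
        #else
        typedef char TCHAR;
        #endif

        // test5 has type wchar_t* (not const wchar_t*), so deduction for
        // operator<<(const T&) yields T = wchar_t* as an identity match,
        // while operator<<(const T*) would need a qualification conversion;
        // the reference overload therefore wins, printing "ref".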


  • C++ file input/output search

    - by Brian J
    Hi, I took the following code from a program I'm writing to check a user-generated string against a dictionary, as well as other validation. My problem is that although my dictionary file is referenced correctly, the program gives the default "no dictionary found". I can't see clearly what I'm doing wrong here; if anyone has any tips or pointers it would be appreciated. Thanks.

        //variables for checkWordInFile
        #define gC_FOUND 99
        #define gC_NOT_FOUND -99

        static bool certifyThat(bool condition, const char* error)
        {
            if (!condition) printf("%s", error);
            return !condition;
        }

        //method to validate a user generated password following password guidelines.
        void validatePass()
        {
            FILE *fptr;
            char password[MAX+1];
            int iChar, iUpper, iLower, iSymbol, iNumber, iTotal, iResult, iCount;

            //shows user password guidelines
            printf("\n\n\t\tPassword rules: ");
            printf("\n\n\t\t 1. Passwords must be at least 9 characters long and less than 15 characters. ");
            printf("\n\n\t\t 2. Passwords must have at least 2 numbers in them.");
            printf("\n\n\t\t 3. Passwords must have at least 2 uppercase letters and 2 lowercase letters in them.");
            printf("\n\n\t\t 4. Passwords must have at least 1 symbol in them (eg ?, $, £, %).");
            printf("\n\n\t\t 5. Passwords may not have small, common words in them eg hat, pow or ate.");

            //gets user password input
        get_user_password:
            printf("\n\n\t\tEnter your password following password rules: ");
            scanf("%s", &password);

            iChar   = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
            iUpper  = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
            iLower  = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
            iSymbol = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
            iNumber = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
            iTotal  = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);

            if (certifyThat(iUpper >= 2, "Not enough uppercase letters!!!\n")
                || certifyThat(iLower >= 2, "Not enough lowercase letters!!!\n")
                || certifyThat(iSymbol >= 1, "Not enough symbols!!!\n")
                || certifyThat(iNumber >= 2, "Not enough numbers!!!\n")
                || certifyThat(iTotal >= 9, "Not enough characters!!!\n")
                || certifyThat(iTotal <= 15, "Too many characters!!!\n"))
                goto get_user_password;

            iResult = checkWordInFile("dictionary.txt", password);
            if (certifyThat(iResult != gC_FOUND, "Password contains small common 3 letter word/s."))
                goto get_user_password;

            iResult = checkWordInFile("passHistory.txt", password);
            if (certifyThat(iResult != gC_FOUND, "Password contains previously used password."))
                goto get_user_password;

            printf("\n\n\n Your new password is verified ");
            printf(password);

            //writing password to passHistory file.
            fptr = fopen("passHistory.txt", "w"); // create or open the file
            for (iCount = 0; iCount < 8; iCount++)
            {
                fprintf(fptr, "%s\n", password[iCount]);
            }
            fclose(fptr);

            printf("\n\n\n");
            system("pause");
        }//end validatePass method

        int checkWordInFile(char * fileName, char * theWord)
        {
            FILE * fptr;
            char fileString[MAX + 1];
            int iFound = -99;

            //open the file
            fptr = fopen(fileName, "r");
            if (fptr == NULL)
            {
                printf("\nNo dictionary file\n");
                printf("\n\n\n");
                system("pause");
                return (0); // just exit the program
            }
            /* read the contents of the file */
            while (fgets(fileString, MAX, fptr))
            {
                if (0 == strcmp(theWord, fileString))
                {
                    iFound = -99;
                }
            }
            fclose(fptr);
            return (0);
        }//end of checkWordInFile
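    Two things stand out in checkWordInFile: fopen only searches the program's working directory for dictionary.txt, and the function returns 0 on every path, never gC_FOUND, while fgets also leaves a trailing newline that defeats strcmp. A corrected sketch (assuming the same MAX constant, and needing <string.h>) might look like this:

        int checkWordInFile(char *fileName, char *theWord)
        {
            FILE *fptr = fopen(fileName, "r");
            char fileString[MAX + 1];

            if (fptr == NULL)
            {
                printf("\nNo dictionary file\n");
                return gC_NOT_FOUND;
            }
            while (fgets(fileString, MAX, fptr))
            {
                /* strip the newline that fgets keeps */
                fileString[strcspn(fileString, "\r\n")] = '\0';
                /* use strstr(theWord, fileString) here instead if the
                   intent is "password contains the word" rather than
                   "password equals the word" */
                if (strcmp(theWord, fileString) == 0)
                {
                    fclose(fptr);
                    return gC_FOUND;
                }
            }
            fclose(fptr);
            return gC_NOT_FOUND;
        }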


  • Using GA in GUI

    - by AlexT
    Sorry if this isn't clear, as I'm writing this on a mobile device and trying to make it quick.

    I've written a basic genetic algorithm with a binary encoding (genes) that builds a fitness value and evolves through several iterations using tournament selection, mutation and crossover. As a basic command-line example it seems to work. The problem I've got is with applying a genetic algorithm within a GUI, as I am writing a maze-solving program that uses the GA to find a method through a maze. How do I turn my random binary encoded genes and fitness function (add all the binary values together) into a method to control a bot around a maze? I have built a basic GUI in Java consisting of a maze of labels (like a grid), with the available routes being in blue and the walls being in black. To reiterate, my GA performs well and contains what any typical GA would (fitness method, get and set population, selection, crossover, etc.) but now I need to plug it into a GUI to get my maze running. What needs to go where in order to get a bot that can move in different directions depending on what the GA says? Rough pseudocode would be great if possible.

    As requested: an individual is built using a separate class (Indiv), with all the main work being done in a Pop class. When a new individual is instantiated, an array of ints represents the genes of said individual, with the genes being picked at random from a number between 0 and 1. The fitness function merely adds together the value of these genes, and the Pop class handles selection, mutation and crossover of two selected individuals. There's not much else to it; the command-line program just shows evolution over n generations, with the total fitness improving over each iteration.

    EDIT: It's starting to make a bit more sense now, although there are a few things that are bugging me... As Adamski has suggested, I want to create an "Agent" with the options shown below. The problem I have is where the random bit string comes into play here. The agent knows where the walls are and has it laid out in a 4-bit string (i.e. 0111), but how does this affect the random 32-bit string? (i.e. 10001011011001001010011011010101)

    If I have the following maze (x is the start place, 2 is the goal, 1 is a wall):

        x 1 1 1 1
        0 0 1 0 0
        1 0 0 0 2

    If I turn left I'm facing the wrong way, and the agent will move completely off the maze if it moves forward. I assume that the first generation of the string will be completely random and it will evolve as the fitness grows, but I don't get how the string will work within a maze.

    So, to get this straight... The fitness is the result of when the agent is able to move and is by a wall. The genes are a string of 32 bits, split into 16 sets of 2 bits to show the available actions, and for the robot to move, the two bits need to be passed with four bits from the agent showing its position near the walls. If the move is to go past a wall, the move isn't made and is deemed invalid, and if the move is made and a new wall is found then the fitness goes up. Is that right?
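    Reading between the lines, the usual encoding is that each 2-bit pair is one action, so the 32-bit string is a fixed program of 16 moves. A minimal sketch of the decoding in Java (the action mapping, and the bot and individual objects, are all hypothetical; nothing here comes from the actual Indiv or Pop classes):

        // assumed mapping: 00=forward, 01=turn left, 10=turn right, 11=reverse
        int[] genes = individual.getGenes();        // 32 ints, each 0 or 1
        for (int i = 0; i < genes.length; i += 2) {
            int action = (genes[i] << 1) | genes[i + 1];
            switch (action) {
                case 0: bot.moveForward(); break;   // moves into a wall are simply
                case 1: bot.turnLeft();    break;   // skipped as invalid, and the
                case 2: bot.turnRight();   break;   // fitness function rewards
                case 3: bot.reverse();     break;   // progress toward the goal
            }
        }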


  • Reading serialised object from file

    - by nico
    Hi everyone. I'm writing a little Java program (it's an ImageJ plugin, but the problem is not specifically ImageJ related) and I have a problem, most probably due to the fact that I never really programmed in Java before... So, I have a Vector of Vectors and I'm trying to save it to a file and read it back. The variable is defined as:

        Vector<Vector<myROI>> ROIs = new Vector<Vector<myROI>>();

    where myROI is a class that I previously defined. Now, to write the vector to a file I use:

        void saveROIs()
        {
            SaveDialog save = new SaveDialog("Save ROIs...", imp.getTitle(), ".xroi");
            String name = save.getFileName();
            if (name == null)
                return;
            String dir = save.getDirectory();
            try
            {
                FileOutputStream fos = new FileOutputStream(dir + name);
                ObjectOutputStream oos = new ObjectOutputStream(fos);
                oos.writeObject(ROIs);
                oos.close();
            }
            catch (Exception e)
            {
                IJ.log(e.toString());
            }
        }

    This correctly generates a binary file containing (I suppose) the object ROIs. Now, I use very similar code to read the file:

        void loadROIs()
        {
            OpenDialog open = new OpenDialog("Load ROIs...", imp.getTitle(), ".xroi");
            String name = open.getFileName();
            if (name == null)
                return;
            String dir = open.getDirectory();
            try
            {
                FileInputStream fin = new FileInputStream(dir + name);
                ObjectInputStream ois = new ObjectInputStream(fin);
                ROIs = (Vector<Vector<myROI>>) ois.readObject(); // This gives an error
                ois.close();
            }
            catch (Exception e)
            {
                IJ.log(e.toString());
            }
        }

    But this function does not work. First, I get a warning:

        warning: [unchecked] unchecked cast
        found   : java.lang.Object
        required: java.util.Vector<java.util.Vector<myROI>>
        ROIs = (Vector <Vector <myROI> >) ois.readObject();

    I Googled that and saw that I can suppress it by prepending @SuppressWarnings("unchecked"), but this just makes things worse, as I get an error:

        <identifier> expected
        ROIs = (Vector <Vector <myROI> >) ois.readObject();

    In any case, if I omit @SuppressWarnings and ignore the warning, the object is not read and an exception is thrown:

        java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: myROI

    Again Google tells me myROI needs to implement Serializable. I tried just adding implements Serializable to the class definition, but it is not sufficient. Can anyone give me some hints on how to proceed in this case? Also, how do I get rid of the typecast warning?
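    Two separate issues are in play here. The NotSerializableException means myROI, and every non-transient field type it contains, recursively, must implement java.io.Serializable; adding the interface to myROI alone is not enough if one of its fields is itself not serializable. A sketch (the field shown in the comment is hypothetical):

        import java.io.Serializable;

        public class myROI implements Serializable {
            private static final long serialVersionUID = 1L;
            // every non-transient field must also be serializable;
            // mark fields that need no persisting as transient, e.g.
            // private transient ImagePlus imp;
        }

    As for the compile error: @SuppressWarnings is an annotation and may only precede a declaration, so it belongs on the loadROIs() method (or on the local variable declaration), not on the assignment statement itself.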


  • MySQL to .XLS help

    - by kielie
    Hi guys, I have to create a script that takes a MySQL table, exports it into .XLS format, and then saves that file into a specified folder on the web host. I got it working, but now I can't seem to get it to automatically save the file to the location without prompting the user. It needs to run every day at a specified time, so it can save the previous day's leads into a .XLS file on the web host. Here is the code:

        <?php
        // DB TABLE Exporter
        //
        // How to use:
        //
        // Place this file in a safe place, edit the info just below here,
        // browse to the file, enjoy!

        // CHANGE THIS STUFF FOR WHAT YOU NEED TO DO
        $dbhost  = "-";
        $dbuser  = "-";
        $dbpass  = "-";
        $dbname  = "-";
        $dbtable = "-";
        // END CHANGING STUFF

        $cdate = date("Y-m-d"); // get current date

        // first thing that we are going to do is make some functions for writing out
        // an excel file. These functions do some hex writing and to be honest I got
        // them from somewhere else, but hey, it works, so I am not going to question it,
        // just reuse.

        // This one makes the beginning of the xls file
        function xlsBOF()
        {
            echo pack("ssssss", 0x809, 0x8, 0x0, 0x10, 0x0, 0x0);
            return;
        }

        // This one makes the end of the xls file
        function xlsEOF()
        {
            echo pack("ss", 0x0A, 0x00);
            return;
        }

        // this will write text in the cell you specify
        function xlsWriteLabel($Row, $Col, $Value)
        {
            $L = strlen($Value);
            echo pack("ssssss", 0x204, 8 + $L, $Row, $Col, 0x0, $L);
            echo $Value;
            return;
        }

        // make the connection and DB query
        $dbc = mysql_connect($dbhost, $dbuser, $dbpass) or die(mysql_error());
        mysql_select_db($dbname);
        $q  = "SELECT * FROM " . $dbtable . " WHERE date ='$cdate'";
        $qr = mysql_query($q) or die(mysql_error());

        // Ok now we are going to send some headers so that this
        // thing that we are going to make comes out of the browser
        // as an xls file.
        header("Pragma: public");
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");

        // this line is important, it makes the file name
        header("Content-Disposition: attachment;filename=export_" . $dbtable . ".xls ");
        header("Content-Transfer-Encoding: binary ");

        // start the file
        xlsBOF();

        // these will be used for keeping things in order.
        $col = 0;
        $row = 0;

        // This tells us that we are on the first row
        $first = true;

        while ($qrow = mysql_fetch_assoc($qr))
        {
            // Ok we are on the first row,
            // let's make some headers of sorts
            if ($first)
            {
                foreach ($qrow as $k => $v)
                {
                    // take the key and make a label:
                    // make it upper case and replace _ with ' '
                    xlsWriteLabel($row, $col, strtoupper(ereg_replace("_", " ", $k)));
                    $col++;
                }
                // prepare for the first real data row
                $col = 0;
                $row++;
                $first = false;
            }
            // go through the data
            foreach ($qrow as $k => $v)
            {
                // write it out
                xlsWriteLabel($row, $col, $v);
                $col++;
            }
            // reset col and go to the next row
            $col = 0;
            $row++;
        }

        xlsEOF();
        exit();
        ?>

    I tried using fwrite to accomplish this, but it didn't seem to go very well. I removed the header information too, but nothing worked. Here is the original code, as I found it. Any help would be greatly appreciated. :-) Thanks in advance. :-)
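    Because xlsBOF(), xlsWriteLabel(), and xlsEOF() all echo their bytes, one low-touch way to save to disk instead of to the browser (a sketch; the export path is made up) is to drop the header() calls, wrap the existing calls in output buffering, and write the captured bytes with file_put_contents():

        // instead of sending headers, capture everything echoed below
        ob_start();
        xlsBOF();
        // ... the existing row-writing loop goes here, unchanged ...
        xlsEOF();
        $bytes = ob_get_clean();

        // write yesterday's export into a folder on the web host
        file_put_contents('/path/to/exports/export_' . $dbtable . '.xls', $bytes);

    Run from cron at the desired time, nothing prompts the user, since no Content-Disposition header is ever sent.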


  • What's a clean way to have the server return a JavaScript function which would then be invoked?

    - by Khnle
    My application is architected as a host of plug-ins that have not yet been written. There's a long reason for this, but with each new year the business logic will be different and we don't know what it will be like (think of TurboTax if that helps). The plug-ins consist of both server and client components. The server components deal with business logic and persisting the data into database tables, which will be created at a later time as well. The JavaScript manipulates the DOM for the browsers to render afterward. Each plug-in lives in a separate assembly, so that they won't disturb the main application, i.e., we don't want to recompile the main application.

    Long story short, I am looking for a way to return JavaScript functions to the client from an Ajax GET request, and execute these JavaScript functions (which are just returned). Invoking a function in JavaScript is easy. The hard part is how to organize or structure it so that I won't have a maintenance problem. So concatenating with a StringBuilder, to end up with JavaScript code as the result of calling toString() on the string builder object, is out of the question. I want there to be no difference between writing JavaScript code normally and writing JavaScript code for this dynamic purpose. An alternative is to manipulate the DOM on the server side, but I doubt that would be as elegant as using jQuery on the client side. I am open to a C# library that supports chainable calls like jQuery and also manipulates the DOM. Do you have any ideas, or is it too much to ask, or are you too confused?

    Edit 1: The point is to avoid recompiling, hence the plug-in architecture. In some other parts of the program I already use the concept of dynamically loading JavaScript files. That works fine. What I am looking at here is somewhere in the middle of the program, when an Ajax request is sent to the server.

    Edit 2: To illustrate my question: normally, you would see the following code. An Ajax request is sent to the server, and a JSON result is returned to the client, which then uses jQuery to manipulate the DOM (creating a tag and adding it to the container in this case).

        $.ajax({
            type: 'get',
            url: someUrl,
            data: { '': '' },
            success: function (data) {
                var ul = $('<ul>').appendTo(container);
                var decoded = $.parseJSON(data);
                $.each(decoded, function (i, e) {
                    var li = $('<li>').text(e.FIELD1 + ',' + e.FIELD2 + ',' + e.FIELD3);
                    ul.append(li);
                });
            }
        });

    The above is extremely simple. But next year, what the server returns is totally different, and how the data is to be rendered would also be different. In a way, this is what I want:

        var container = $('#some-existing-element-on-the-page');
        $.ajax({
            type: 'get',
            url: someUrl,
            data: { '': '' },
            success: function (data) {
                var decoded = $.parseJSON(data);
                var fx = decoded.fx;
                var data = decoded.data;
                // fx is the dynamic function that creates the DOM from the
                // data and appends it to the existing container
                fx(container, data);
            }
        });

    I don't need to know, at this time, what the data would be like, but in the future I will, and I can then write fx accordingly.
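    One way to make the success callback generic (a sketch, under the assumption that the server serializes the plug-in's render logic as a string of JavaScript source in the fx property) is to compile that string once with the Function constructor:

        $.ajax({
            type: 'get',
            url: someUrl,
            data: { '': '' },
            success: function (data) {
                var decoded = $.parseJSON(data);
                // decoded.fx is assumed to be the body of a function taking
                // (container, data), built server-side by the plug-in
                var fx = new Function('container', 'data', decoded.fx);
                fx($('#some-existing-element-on-the-page'), decoded.data);
            }
        });

    This carries the usual eval-style caveats: the server must be the only author of fx, or you have handed script injection to whoever can shape the response.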


  • Log4j: Events appear in the wrong logfile

    - by Markus
    Hi there! To be able to log and trace some events, I've added a LoggingHandler class to my Java project. Inside this class I'm using two different log4j logger instances - one for logging an event and one for tracing an event into different files. The initialization block of the class looks like this:

        public void initialize()
        {
            System.out.print("starting logging server ...");

            // create logger instances
            logLogger = Logger.getLogger("log");
            traceLogger = Logger.getLogger("trace");

            // create pattern layout
            String conversionPattern = "%c{2} %d{ABSOLUTE} %r %p %m%n";
            try
            {
                patternLayout = new PatternLayout();
                patternLayout.setConversionPattern(conversionPattern);
            }
            catch (Exception e)
            {
                System.out.println("error: could not create logger layout pattern");
                System.out.println(e);
                System.exit(1);
            }

            // add pattern to file appender
            try
            {
                logFileAppender = new FileAppender(patternLayout, logFilename, false);
                traceFileAppender = new FileAppender(patternLayout, traceFilename, false);
            }
            catch (IOException e)
            {
                System.out.println("error: could not add logger layout pattern to corresponding appender");
                System.out.println(e);
                System.exit(1);
            }

            // add appenders to loggers
            logLogger.addAppender(logFileAppender);
            traceLogger.addAppender(traceFileAppender);

            // set logger level
            logLogger.setLevel(Level.INFO);
            traceLogger.setLevel(Level.INFO);

            // start logging server
            loggingServer = new LoggingServer(logLogger, traceLogger, serverPort, this);
            loggingServer.start();

            System.out.println(" done");
        }

    To make sure that only one thread is using the functionality of a logger instance at a time, each logging / tracing method calls the logging method .info() inside a synchronized block. One example looks like this:

        public void logMessage(String message)
        {
            synchronized (logLogger)
            {
                if (logLogger.isInfoEnabled() && logFileAppender != null)
                {
                    logLogger.info(instanceName + ": " + message);
                }
            }
        }

    If I look at the log files, I see that sometimes an event appears in the wrong file. One example (from the trace file; the unprefixed "vehicle ... was pushed" entry belongs in the log file):

        trace 10:41:30,773 11080 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1267093 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,784 11091 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1156513 to vehicle 1105792 (slaveControl 1)
        trace 10:41:30,796 11103 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1104306 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,808 11115 INFO masterControl(192.168.2.21): vehicle 1327879 was pushed to slave control 1
              10:41:30,808 11115 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1101572 to vehicle 106741 (slaveControl 1)
        trace 10:41:30,820 11127 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1055293 to vehicle 1104306 (slaveControl 1)

    I think that the problem occurs every time two events happen at the same time (here: 10:41:30,808). Does anybody have an idea how to solve my problem? I already tried to add a sleep() after the method call, but that didn't help...

    BR, Markus

    Edit: Interleaved output, exactly as it appears in the files:

        logtrace 11:16:07,75511:16:07,755 1129711297 INFOINFO masterControl(192.168.2.21): string broadcast message was pushed from 1291400 to vehicle 1138272 (slaveControl 1)masterControl(192.168.2.21): vehicle 1333770 was added to slave control 1

    or

        log 11:16:08,562 12104 INFO 11:16:08,562 masterControl(192.168.2.21): string broadcast message was pushed from 117772 to vehicle 1217744 (slaveControl 1) 12104 INFO masterControl(192.168.2.21): vehicle 1169775 was pushed to slave control 1

    Edit 2: It seems like the problem only occurs if logging methods are called from inside an RMI thread (my client / server exchange information using RMI connections). ...

    Edit 3: I solved the problem by myself: it seems like log4j is NOT completely thread-safe. After synchronizing all log / trace methods using a separate object, everything is working fine. Maybe the lib is writing the messages to a thread-unsafe buffer before writing them to file?
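    For reference, the fix described in the last edit amounts to guarding both loggers with one shared lock object rather than synchronizing on each logger separately. A sketch of that arrangement (the trace method is a hypothetical twin of the logging one):

        // one lock shared by all logging and tracing methods
        private static final Object LOG_LOCK = new Object();

        public void logMessage(String message)
        {
            synchronized (LOG_LOCK)
            {
                if (logLogger.isInfoEnabled() && logFileAppender != null)
                {
                    logLogger.info(instanceName + ": " + message);
                }
            }
        }

        public void traceMessage(String message)
        {
            synchronized (LOG_LOCK)
            {
                traceLogger.info(instanceName + ": " + message);
            }
        }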


  • Can't figure out this file behavior in iOS 4.2

    - by Don Jones
    Odd file behavior here. Here's the thing: I'm using the phone camera to snap a picture and internally generating a thumbnail. I'm saving those as temp files in the Documents directory. Here's the complete code:

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
            UIImage *photo = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
            photo = [self scaleAndRotateImage:photo];

            // save photo
            NSString *docsDir = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
            NSLog(@"Writing to %@", docsDir);

            // save photo file
            NSLog(@"Saving photo as %@", [NSString stringWithFormat:@"%@/temp_photo.png", docsDir]);
            [UIImagePNGRepresentation(photo) writeToFile:[NSString stringWithFormat:@"%@/temp_photo.png", docsDir] atomically:YES];

            // make and save thumbnail
            NSLog(@"Saving thumbnail as %@", [NSString stringWithFormat:@"%@/temp_thumb.png", docsDir]);
            UIImage *thumb = [self makeThumbnail:photo];
            [UIImagePNGRepresentation(thumb) writeToFile:[NSString stringWithFormat:@"%@/temp_thumb.png", docsDir] atomically:YES];

            NSFileManager *manager = [NSFileManager defaultManager];
            NSError *error;
            NSArray *files = [manager contentsOfDirectoryAtPath:docsDir error:&error];
            NSLog(@"\n\nThere are %d files in the documents directory %@", (files.count), docsDir);
            for (int i = 0; i < files.count; i++) {
                NSString *filename = [files objectAtIndex:i];
                NSLog(@"Seeing filename %@", filename);
            }

            // done
            //[photo release];
            //[thumb release];
            [self dismissModalViewControllerAnimated:NO];
            [delegate textInputFinished];
        }

    As you can see, I've put quite a bit of logging in here to try to figure out my problem (which is coming up). The log output to this point is:

        Writing to /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents
        Saving photo as /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/temp_photo.png
        Saving thumbnail as /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/temp_thumb.png
        There are 3 files in the documents directory /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents
        Seeing filename temp_photo.png
        Seeing filename temp_text.txt
        Seeing filename temp_thumb.png

    This is absolutely as expected. I clearly have three files on the device. Here's the very next code that operates - the code that receives the textInputFinished message:

        - (void)textInputFinished
        {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSString *docsDir = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
            NSError *error;

            // get next filename
            NSString *filename;
            int i = 0;
            do {
                i++;
                filename = [NSString stringWithFormat:@"%@/reminder_%d", docsDir, i];
                NSLog(@"Trying filename %@", filename);
            } while ([fileManager fileExistsAtPath:[filename stringByAppendingString:@".txt"]]);
            NSLog(@"New base filename is %@", filename);

            NSArray *files = [fileManager contentsOfDirectoryAtPath:docsDir error:&error];
            NSLog(@"There are %d files in the directory %@", (files.count), docsDir);

    This tests for a new, non-temp, not-in-use filename. It does that - but then it says there aren't any files in the documents folder! Here's the logged output:

        Trying filename /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/reminder_1
        New base filename is /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents/reminder_1
        There are 0 files in the directory /var/mobile/Applications/77D792DC-A224-4A47-8A4C-BB7C557626F3/Documents

    What the heck? Where did the three files go?
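    One detail worth ruling out (an observation, not from the original post): contentsOfDirectoryAtPath:error: returns nil when the listing fails, and in Objective-C sending count to nil quietly yields 0, which is indistinguishable from an empty directory unless the error is inspected:

        NSError *error = nil;
        NSArray *files = [fileManager contentsOfDirectoryAtPath:docsDir error:&error];
        if (files == nil) {
            // the listing failed; 'files.count' would have read as 0
            NSLog(@"Listing failed: %@", error);
        }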


  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machines to host it. When we create a new branch, the build script will create a new database with the ID of that branch, create the schema objects, and copy over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode.

    The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for the files. The size of the database after loading data is ~400 megs. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters.

    I got a nice clean log file this morning. When I turned on my computer it started all five databases. However, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases.

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
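    A hunch worth testing (not confirmed by the log alone): SQL Server Express creates databases with AUTO_CLOSE ON by default, which shuts a database down whenever its last connection drops and produces exactly this pattern of repeated "Starting up database" messages, with crash-style recovery whenever the shutdown was not clean. Turning it off per database is a one-liner:

        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;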


  • Use of select or multithread for 80 or more clients?

    - by Tushar Goel
    I am working on a project in which I need to read from 80 or more clients, write their output into a file continuously, and then read the new data for another task. My question is: what should I use, select or multithreading?

    Also, I tried to implement this with multithreading using the read/fgets and write/fputs calls, but as they are blocking calls and only one operation can be performed at a time, it is not feasible. Any idea is much appreciated.

    Update 1: I have tried to implement the same using a condition variable. I was able to achieve this, but it is writing and reading one at a time. When another client tries to write, it cannot write unless I quit the first thread. I do not understand this. This should work now. What mistake am I making?

    Update 2: Thanks all. I succeeded in getting this model implemented using a mutex and condition variable. The updated code is below:

        /******* header file *******/
        char *mailbox;
        pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t writer = PTHREAD_COND_INITIALIZER;

        int main(int argc, char *argv[])
        {
            pthread_t t1, t2;
            pthread_attr_t attr;
            int fd, sock, *newfd;
            struct sockaddr_in cliaddr;
            socklen_t clilen;
            void *read_file();
            void *update_file();

            // making a server socket
            if ((fd = make_server(atoi(argv[1]))) == -1)
                oops("Unable to make server", 1)

            // detaching threads
            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

            /// opening thread for reading
            pthread_create(&t2, &attr, read_file, NULL);

            while (1)
            {
                clilen = sizeof(cliaddr);
                // accepting request
                sock = accept(fd, (struct sockaddr *)&cliaddr, &clilen);
                // error comparison against failure of request and INT
                if (sock == -1 && errno != EINTR)
                    oops("accept", 2)
                else if (sock == -1 && errno == EINTR)
                    oops("Pressed INT", 3)

                newfd = (int *)malloc(sizeof(int));
                *newfd = sock;
                // creating thread per request
                pthread_create(&t1, &attr, update_file, (void *)newfd);
            }
            free(newfd);
            return 0;
        }

        void *read_file(void *m)
        {
            pthread_mutex_lock(&lock);
            while (1)
            {
                printf("Waiting for lock.\n");
                pthread_cond_wait(&writer, &lock);
                printf("I am reading here.\n");
                printf("%s", mailbox);
                mailbox = NULL;
                pthread_cond_signal(&writer);
            }
        }

        void *update_file(int *m)
        {
            int sock = *m;
            int fs;
            int nread;
            char buffer[BUFSIZ];

            if ((fs = open("database.txt", O_RDWR)) == -1)
                oops("Unable to open file", 4)

            while (1)
            {
                pthread_mutex_lock(&lock);
                write(1, "Waiting to get writer lock.\n", 29);
                if (mailbox != NULL)
                    pthread_cond_wait(&writer, &lock);
                lseek(fs, 0, SEEK_END);
                printf("Reading from socket.\n");
                nread = read(sock, buffer, BUFSIZ);
                printf("Writing in file.\n");
                write(fs, buffer, nread);
                mailbox = buffer;
                pthread_cond_signal(&writer);
                pthread_mutex_unlock(&lock);
            }
            close(fs);
        }
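    For comparison, the single-threaded select() shape the question asks about would look roughly like this (a sketch only: error handling is trimmed, listen_fd is assumed to be already bound and listening, and <sys/select.h> and <unistd.h> are needed):

        fd_set readfds;
        int client_fds[FD_SETSIZE];
        int nclients = 0;

        for (;;) {
            int maxfd = listen_fd;
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);
            for (int i = 0; i < nclients; i++) {
                FD_SET(client_fds[i], &readfds);
                if (client_fds[i] > maxfd) maxfd = client_fds[i];
            }
            if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
                continue; /* interrupted by a signal; retry */
            if (FD_ISSET(listen_fd, &readfds) && nclients < FD_SETSIZE - 1)
                client_fds[nclients++] = accept(listen_fd, NULL, NULL);
            for (int i = 0; i < nclients; i++) {
                if (FD_ISSET(client_fds[i], &readfds)) {
                    char buf[BUFSIZ];
                    ssize_t n = read(client_fds[i], buf, sizeof buf);
                    if (n <= 0) { /* disconnect: drop the client */
                        close(client_fds[i]);
                        client_fds[i--] = client_fds[--nclients];
                    } else {
                        /* append buf to the log file here */
                    }
                }
            }
        }

    With around 80 clients either model works; select() trades the locking shown above for one event loop, at the cost that no single read or write may be allowed to block it.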


  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014?

    - by Richard Lefebvre
    Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We’re hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.

    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!

    CX Spotlight Sessions
    “Accelerating Big Profits in Big Data,” Jeff Tanner, Baylor University
    “Using Content Marketing to Impact Every Stage of the Buyer’s Journey,” Jennifer Agustin, Bizo
    “Expanding Your Marketing with Proven Testing and Optimization,” Brian Border, Shutterfly and Matthew Balthazor, Epson
    “Modern Marketing: The New Digital Dialogue,” Cory Treffiletti, Oracle

    A Special Marquee Session
    Dell’s Hayden Mugford will speak on “The Digital Ecosystem: Driving Experience Through Contact Engagement.” She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured and tripled open and click-through rates versus push e-mail.

    Other Oracle Marketing Cloud Session Highlights
    Thought leadership by role
    Exploring the benefits of moving to the Cloud
    Product line roadmaps and innovations in Marketing
    Technical deep dives for product lines within Marketing
    Best practices and impactful business measurements
    Solutions that are integrated across CX

    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors and Managers.

    Outcomes
    Customers attending Marketing—CX Central @ OpenWorld will be able to:
    Gain insight into delivering consistent cross-channel marketing
    Discover how to provide the right information to the right customer at the right time and with the right channel
    Get answers to burning questions and advice on business challenges
    Hear from other Oracle customers about recommended best practices to help their organization move forward
    Network and share ideas to help create a strategy for connecting with customers in better ways

    It Wouldn’t Be an Oracle Marketing Cloud Event Without a Party!
    We’re hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group Capital Cities! Join us Tuesday, September 30 from 7-9 p.m.

    OpenWorld is a fabulous way for your customers to see all that Oracle Marketing Cloud has to offer. Pass on an invitation today.

    By Laura Vogel (Oracle)


  • ASP.NET JavaScript Routing for ASP.NET MVC–Constraints

    - by zowens
    If you haven’t had a look at my previous post about ASP.NET routing, go ahead and check it out before you read this post: http://weblogs.asp.net/zowens/archive/2010/12/20/asp-net-mvc-javascript-routing.aspx. And the code is here: https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing

    Anyway, this post is about routing constraints. A routing constraint is essentially a way for the routing engine to filter out route patterns based on the data from the URL. For example, if I have a route where all the parameters are required, I could use a constraint on the required parameters to say that the parameter is non-empty. Here’s what the constraint would look like (the original sample is shown as an image; a reconstruction is sketched at the end of this post):

    Notice that this is a class that inherits from IRouteConstraint, which is an interface provided by System.Web.Routing. The Match method returns true if the value is a match (and can be further processed by the routing rules) or false if it does not match (and the route will be matched further along the route collection). Because routing constraints are so essential to the route matching process, it was important that they be part of my JavaScript routing engine. But the problem is that we need to somehow represent the constraint in JavaScript. I made a design decision early on that you MUST put this constraint into JavaScript to match a route. I didn’t want to have server interaction for the URL generation, like I’ve seen in so many applications. While that is easy to maintain, it causes maintenance problems in my opinion.

    So the way constraints work in JavaScript is that the constraint, as an object type definition, is set on the route manager. When a route is created, a new instance of the constraint is created with the specific parameter. In its current form the constraint function MUST return a function that takes the route data and will return true or false. You will see the NotEmpty constraint in a bit. Another piece of the puzzle is that the JavaScript can exist as a string in your application that is pulled in when the routing JavaScript code is generated. There is a simple interface, IJavaScriptAddition, that I have added that will be used to output custom JavaScript.

    Let’s put it all together. Here is the NotEmpty constraint (also an image in the original). There are a few things at work here. The constraint is called “notEmpty” in JavaScript. When you add the constraint to a parameter in your C# code, the route manager generator will look for the JsConstraint attribute to find the name of the constraint type, falling back to the class name. For example, if I didn’t apply the “JsConstraint” attribute, the constraint would be called “NotEmpty”. The JavaScript code essentially adds a function to the “constraintTypeDefs” object on the “notEmpty” property (this is how constraints are added to routes). The function returns another function that will be invoked with routing data.

    Here’s how you would use the NotEmpty constraint in C#, and it will work with the JavaScript routing generator. The only catch to using route constraints currently is that one usage (again shown as an image in the original) is not supported: the constraint will work in C# but is not supported by my JavaScript routing engine. (I take pull requests, so if you’d like this... go ahead and implement it.)

    I just wanted to take this post to explain a little bit about the background of constraints. I am looking at expanding the current functionality, but for now this is a good start. Thanks for all the support with the JavaScript router. Keep the feedback coming!
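    As promised above, here is a reconstruction of what the NotEmpty pieces plausibly look like, based purely on the description in this post (treat the exact names and shapes as assumptions, since the original samples are images). The C# side:

        using System.Web.Routing;

        public class NotEmpty : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route,
                              string parameterName, RouteValueDictionary values,
                              RouteDirection routeDirection)
            {
                object value;
                if (!values.TryGetValue(parameterName, out value))
                    return false;
                return !string.IsNullOrEmpty(value as string);
            }
        }

    And the JavaScript side, registered on the route manager as described (the registration object name is a guess):

        // returns the matcher function that is invoked with the route data
        routeManager.constraintTypeDefs.notEmpty = function () {
            return function (data) {
                return data !== undefined && data !== null && data !== '';
            };
        };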


  • Tool to convert blogger.com content to dasBlog

    - by Daniel Moth
    Due to blogger.com dropping FTP support, I've had to move my blog. If you are in a similar situation, this post will help you by showing you the necessary steps to take.

    Goals
    No loss of blog posts or comments, AND all existing permalinks continue to work (redirect to the correct place).

    Steps
    1. Download the XML files corresponding to your blogger.com content and store them in a folder.
    2. Install and configure dasBlog on your local machine.
    3. Configure your web.config file (will need updating once you run step 4).
    4. Use the tool I describe further down to generate the content and place it at the right place.
    5. Test your site locally.
    6. Once you are happy, repeat step 2 on your hosting provider of choice. Remember to copy up your dasBlog theme folder if you created one.
    7. Copy up the local web.config file and the XML dasBlog content files generated by the tool of step 4.
    8. Test your site on the server. Once you are happy, go live (following instructions from your hoster). In my case, I gave the nameservers from my new hoster to my existing domain registrar and they made the switch.

    Tool (code)
    At step 4 above I referred to a tool. That is an overstatement: it is simply one 450-line C# code file that you can download here: BloggerToDasBlog.cs. I used this from a .NET 2.0 console app (and I run it under the Visual Studio debugger, i.e. F5) like this: Program.cs. The console app referenced the dasBlog 2.3 ASP.NET Blogging Engine, i.e. the newtelligence.DasBlog.Runtime.dll assembly.

    Let me describe what the code does.

    Input:
    - A path to a folder where the XML files from the old blogger.com blog reside. It can deal with both types of XML file.
    - A full file path to a file where it creates XML redirect input (as required by the rewriteMap mentioned here).
    - The blog URL.
    - The author's email.
    - The blog author name.
    - A path to an empty folder where the new XML dasBlog content files will get created.
    - The subfolder name used after the domain name in the URL.
    - The 3 regex patterns to use. You can use the same as mine, but will need to tweak the monthly_archive rule.

    Again, to see what values I passed for all the above, see my Program.cs file.

    Output:
    - It creates dasBlog XML files in the folder specified. It creates those by parsing the old blogger.com XML files that reside in the folder specified. After that is generated, copy it to the "Content" folder under your dasBlog installation.
    - It creates an XML file with a single ignorable root element and a bunch of inner XML elements. You can copy-paste these in the web.config file as discussed in this post.

    Other notes:
    - For each blog post, it detects outgoing links to itself (i.e. to the same blog), and rewrites those to point to the new URLs. So internal links do not rely on the web.config redirects.
    - It deals with duplicate post titles; it does not deal with triplicates and higher.
    - Removes all references to blogger.com (e.g. references to [email protected], the injected hidden footer for statistics that each blog post has, and others - see the code).
    - It creates a lot of diagnostic output (in the Output window), and indeed the documentation for the code is in the Debug.WriteLine statements ;)

    This is not code I will maintain or support - it was a throwaway one-use project that I am sharing here as a starting point for anyone finding themselves in the same boat that I was. Enjoy "as is".

    Comments about this post welcome at the original blog.


  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level. The data quality level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again.

    The idea of Data Quality Services is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you will achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don't throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically.

    The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application, together with the defined policies, people, and master data management processes, allows you to maintain this level permanently. This idea is shown in the following picture.

    OK, now you know what DQS and MDS are about. You can imagine the importance of maintaining the data quality. Here are some resources that help you prepare and execute the data quality (DQ) and master data management (MDM) activities.

    Books
    - Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 - a general introduction to MDM, MDS, and data profiling. Matching explained in depth.
    - Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 - I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS.
    - Thomas Redman: Data Quality: The Field Guide - you should start with this book. Thomas Redman is the father of DQ and MDM.
    - Tyler Graham: Microsoft SQL Server 2012 Master Data Services - MDS in depth from a product team mate.
    - Arkady Maydanchik: Data Quality Assessment - data profiling in depth.
    - Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning - advanced data profiling with data mining.

    Forthcoming presentations
    I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013, Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 - a good kick start to your first DQ and/or MDM project.

    Courses
    Data Quality and Master Data Management with SQL Server 2012 - I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar form, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

    Start improving the quality of your data now!

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

    A number of people at Clarity are personally interested in home energy conservation and home automation, and we feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsoft's Hohm web service, it provides a great model for communicating the concept. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices, so I designed an application that focuses on total home control and not just energy usage.

    Application Overview
    By monitoring home consumption in real time and with yearly projections, users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use this information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on and off at the touch of a toggle switch on your phone, away from home. Forgot to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home, making it a completely customized and convenient experience. To be clear, this equates to more security, a big environmental impact, and even bigger savings.

    Design and User Interface
    Since this panorama application is designed for Windows Phone 7 devices, it complies with the UI Design and Interaction Guide for WP7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn devices on and off (not shown in the mockup). I followed the recommendations for text-based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quick-launching common user actions. One last design feature to point out is the technical reason for supplying both light and dark themes for the app: since this application targets energy consumption, it only makes sense to consider the effect of the app's background color or image on the phone's energy use. When displaying darker colors like black, the OLED display may use less power, extending battery life.

    Other Considerations
    For now I left out wind- and solar-powered energy options because they are not available to everyone. Renewable energy sources and the new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations, the application could recommend customized ways to save based on your current utility providers and the options available in your area. If your television or refrigerator is guilty of sucking a lot of energy, then you may see recommendations for Energy Star products that could save you even more money!
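    As a purely illustrative sketch of the panorama structure described above, here is what the page skeleton might look like in Windows Phone 7 XAML (the layout, the binding names like CurrentConsumptionKwh and IsGarageDoorOpen, and the view model behind them are all invented for this example; ToggleSwitch ships in the separate Silverlight for Windows Phone Toolkit, not the core SDK):

    <!-- hypothetical sketch: one panorama item per concern, as in the mockup -->
    <controls:Panorama Title="home control"
        xmlns:controls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls"
        xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit">
      <controls:PanoramaItem Header="energy">
        <!-- total and per-device consumption meters would bind here -->
        <TextBlock Text="{Binding CurrentConsumptionKwh}" FontSize="64" />
      </controls:PanoramaItem>
      <controls:PanoramaItem Header="devices">
        <StackPanel>
          <toolkit:ToggleSwitch Header="garage door"
              IsChecked="{Binding IsGarageDoorOpen, Mode=TwoWay}" />
          <toolkit:ToggleSwitch Header="living room TV"
              IsChecked="{Binding IsTvOn, Mode=TwoWay}" />
        </StackPanel>
      </controls:PanoramaItem>
    </controls:Panorama>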

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting it here. My scenario is that I have two domains – domain1.com and domain2.com – both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain – with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains – i.e., to not be redirected like all other visitors – so that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved.

    The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):

    .htaccess in root of domain1:
    RewriteEngine On
    RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
    RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
    RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:
    RewriteEngine On
    RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
    RewriteCond %{HTTP_HOST} ^staging.domain1.com$ [NC]
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice. However, I find it a more granular method of control, because I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!).

    If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn. This post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just deny specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode'-style operation easily findable via Google).

    You can also extend the IP address detection. Firstly, by using wildcards, e.g. ^10\.2\.2\..* – in the last octet, \. matches the literal "." and .* matches "zero or more arbitrary characters". Secondly, with character classes – but note that [1-255] does NOT match the numbers 1 through 255; a character class matches a single character, so [1-255] only matches '1', '2' or '5'. Numeric ranges have to be built digit by digit, e.g. ^10\.2\.[1-9]?[0-9]\. for third octets 0–99, or ^10\.2\.1[0-1][0-9]\. for 100–119. The third way, if you wish to specify multiple discrete IP addresses, is to group alternatives in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$ – and you can of course use character classes to substitute octets or single digits again.
NB: if you're using individual RewriteCond lines to specify multiple IPs/ranges, make sure to put [OR] at the end of each one; otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! As far as I'm aware, though, this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: an older question here came in useful when I was verifying my own knowledge prior to my futzing around with code, as were the various links below (can't hyperlink them all due to spam protection; other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall.

Other referenced URLs:
internetofficer.com/seo-tool/regex-tester/
fantomaster.com/faarticles/rewritingurls.txt
addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
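For what it's worth, here is one possible consolidation along the lines the author asks about – an untested sketch, and it only works if the subdomains share domain1's document root so that a single root .htaccess sees every Host header; test.domain1.com stands in for the preserved test subdomain:

    RewriteEngine On
    # let the trusted IP through to everything, unredirected
    RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
    # leave the preserved test subdomain alone for all visitors
    RewriteCond %{HTTP_HOST} !^test\.domain1\.com$ [NC]
    # everyone else, on any other domain1 host (root, staging, staging2), goes to domain2
    RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

The trade-off the author mentions still applies: a single ruleset loses the per-subdomain granularity of allowing different IP addresses into different staging areas.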

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO there are two things to consider: the relevant SEO practices, and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching and promoting your website, especially if ranking matters to you.

    Understanding SEO
    For starters, you begin with keyword research and then optimize your content according to your findings by tagging, meta tags, etc. Drupal modules, once installed, help you manage a lot of these parameters:
    - Identifying the target keywords
    - Using the Page Title and Token modules
    - Pathauto configuration
    - <h1> heading tags
    - Optimizing Drupal's default robots.txt file
    - Etc.
    While Drupal gives you a lot of ability to make your website content worthy and search-engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.

    Modules Overview
    Drupal's power is at its best when you have these modules and a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically and wisely. Understanding your keyword competition and enhancing your content is the basic key to success – and, of course, the modules:

    Pathauto
    Automatically creates search-engine-friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.

    NodeWords
    An amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.

    Page Title
    Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.

    Global Redirect
    Manage content duplication, 301 redirects, and URL validation with this small but powerful module.

    Taxonomy Manager
    Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.

    RobotsTxt
    A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.

    XML Sitemap
    An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want included in the index. It can even automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node, or at a specific interval.

    Node Import
    This module allows you to import a set of nodes from a Comma Separated Values (CSV) or Tab Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, tie those rows to CCK fields (or locations), and file them under the right taxonomy hierarchy. This is a super life-saver module.
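    As a quick, hypothetical illustration of the Pathauto idea: you define a pattern built from tokens, and the module generates the alias when the node is saved (the token name below follows the Drupal 6 token style; the exact token names depend on your Drupal and Token module versions):

    Pathauto pattern for article nodes:   blog/[title-raw]
    Node title:                           Drupal Modules for SEO
    Generated alias:                      mysite.com/blog/drupal-modules-for-seo
    Default URL it replaces:              mysite.com/node/114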

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL smells from the list Phil has been putting together.

    Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
    This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response; in fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

    CREATE TABLE [dbo].[Page](
        [PageId] [int] IDENTITY(1,1) NOT NULL,
        [Title] [varchar](75) NOT NULL,
        [Sequence] [int] NOT NULL,
        [ThemeId] [int] NOT NULL,
        [CustomCss] [text] NOT NULL,
        [CustomScript] [text] NOT NULL,
        [PageGroupId] [int] NOT NULL);

    CREATE PROCEDURE PageSelectBySequence
    (
        @sequenceMin smallint,
        @sequenceMax smallint
    )
    AS
    BEGIN
        SELECT [PageId], [Title], [Sequence], [ThemeId],
               [CustomCss], [CustomScript], [PageGroupId]
        FROM [CMS].[dbo].[Page]
        WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
    END

    Note that the Sequence column is defined as int, while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev)
    There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins; more likely than not, though, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

    SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
           PageEffectParam.Name, PageEffectParam.Value,
           (SELECT EffectName FROM dbo.Effect
             WHERE EffectId = PageEffect.EffectId) AS EffectName
    FROM Page
    INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
    INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

    This can and should be written as:

    SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
           PageEffectParam.Name, PageEffectParam.Value,
           EffectName
    FROM Page
    INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
    INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
    INNER JOIN dbo.Effect ON dbo.Effect.EffectId = PageEffect.EffectId

    The correlated query may just as easily show up in the WHERE clause; it's not a good idea in the SELECT clause or the WHERE clause.

    Few or no comments
    This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included; other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, while comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out.
If a comment is needed to explain what is going on or how the code works, the logic is too complex and needs to be simplified. Comments that explain why are good: comments that explain why the SQL is needed are good, and comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written; if they are needed, you should consider reworking the SQL or simplifying your data model.

Use of functions in a WHERE clause (Anil Das)
Calling a function in the WHERE clause will often negate the indexing strategy: the function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance of one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
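As a hypothetical illustration of that last smell (the table and column names here are invented, not from Nick's examples): the first query hides the indexed column inside a function, so the optimizer has to evaluate it row by row; the second expresses the same filter as a range on the bare column, so an index on OrderDate can be used.

    -- Smell: the function runs for every row, so an index on OrderDate goes unused
    SELECT OrderId FROM dbo.[Order] WHERE YEAR(OrderDate) = 2012;

    -- Rewrite: an equivalent range predicate on the bare column is index-friendly
    SELECT OrderId FROM dbo.[Order]
    WHERE OrderDate >= '20120101' AND OrderDate < '20130101';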

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014

    - by Kathryn Perry
    A Guest Post by Laura Vogel, Director, Oracle Marketing Cloud Events (pictured left)

    Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We're hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.

    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!

    CX Spotlight Sessions
    - "Accelerating Big Profits in Big Data," Jeff Tanner, Baylor University
    - "Using Content Marketing to Impact Every Stage of the Buyer's Journey," Jennifer Agustin, Bizo
    - "Expanding Your Marketing with Proven Testing and Optimization," Brian Border, Shutterfly and Matthew Balthazor, Epson
    - "Modern Marketing: The New Digital Dialogue," Cory Treffiletti, Oracle

    A Special Marquee Session
    Dell's Hayden Mugford will speak on "The Digital Ecosystem: Driving Experience Through Contact Engagement." She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured, and tripled open and click-through rates versus push e-mail.

    It Wouldn't Be an Oracle Marketing Cloud Event Without a Party!
    We're hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group Capital Cities! Join us Tuesday, September 30 from 7-9 p.m.

    Other Oracle Marketing Cloud Session Highlights
    - Thought leadership by role
    - Exploring the benefits of moving to the Cloud
    - Product line roadmaps and innovations in Marketing
    - Technical deep dives for product lines within Marketing
    - Best practices and impactful business measurements
    - Solutions that are integrated across CX

    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors and Managers.
Outcomes
Customers attending Marketing—CX Central @ OpenWorld will be able to:
- Gain insight into delivering consistent cross-channel marketing
- Discover how to provide the right information to the right customer at the right time and with the right channel
- Get answers to burning questions and advice on business challenges
- Hear from other Oracle customers about recommended best practices to help their organization move forward
- Network and share ideas to help create a strategy for connecting with customers in better ways

Resources At a Glance
- Register Now
- Track Site—View Marketing Sessions
- Focus On Session Doc
- Downloadable Justification Email

OpenWorld is a fabulous way for you to see all that Oracle Marketing Cloud has to offer. Register today.

    Read the article

  • Travelling MVP #4: DevReach 2012

    - by DigiMortal
    Our next stop after Varna was Sofia, where DevReach happens. DevReach is one of my favorite conferences in Europe because of its sensible prices and strong speaker line-up. They also have a VIP party after the conference, and this is a good event to meet people you don't see every day, have some discussions with speakers, and find new friends. Our trip from Varna to Sofia took about 6.5 hours by bus. As I was tired from the last evening, it wasn't a problem for me – I slept half the trip. After a smoking pause in Veliko Tarnovo I watched movies on the bus TV. We had supper later in the city center at Happy's – a place with good meat dishes and nice service. And the next day it began…. :)

    DevReach 2012
    DevReach is usually held in Arena Mladost. It's near the airport and the Telerik office. The event is organized by local MVP Martin Kulov together with Telerik. Two days of sessions with strong speakers is reason enough for me to visit an event. Some topics covered by the sessions:
    - Windows 8 development
    - web development
    - SharePoint
    - Windows Azure
    - Windows Phone
    - architecture
    - Visual Studio
    Practically everybody can find an interesting session in every time slot. As the Arena is not huge, it is very easy to go from one session to another if the selected session for a time slot is not what you expected. On the second floor of the Arena there are many places where you can eat. There are simple junk-food places like Burger King and also some restaurants. If you are hungry you will find something for your taste, for sure. You can also buy beer if it is too hot outside :) The weather was very good for October – practically an Estonian summer – 25C and over.

    Sessions I visited
    Here is the list of sessions I visited at DevReach 2012:
    - DevReach 2012 Opening & Welcome Message with Martin Kulov and Stephen Forte
    - Principled N-Tier Solution Design with Steve Smith
    - Data Patterns for the Cloud with Brian Randell
    - .NET Garbage Collection Performance Tips with Sasha Goldshtein
    - Building Secured, Scalable, Low-latency Web Applications with the Windows Azure Platform with Ido Flatow
    - It's a Knockout! MVVM Style Web Applications with Charles Nurse
    - Web Application Architecture – Lessons Learned from Adobe Brackets with Brian Rinaldi
    - Demystifying Visual Studio 2012 Performance Tools with Martin Kulov
    - SPvNext – A Look At All the Exciting And New Features In SharePoint with Sahil Malik
    - Portable Libraries – Why You Should Care with Lino Tadros
    I missed some sessions because of some death-march projects that are going on and that I have to coordinate, but it was not a big loss, as I had time to walk around the session venue's neighborhood and see Sofia Business Park.

    Next year again!
    I will be there again next year, and hopefully more guys from Estonia will join me. I think it's a good idea to take a short vacation for DevReach time and do things like we did this time – Bucharest, Varna, Sofia. It's also a good idea to plan some more free time, so we are not in too much of a hurry and have no work stuff to do on the trip. So far this has been one of the best trips I have organized, and I will go and meet all those guys in this region again! :)

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases.

    Download / Listen: Yet Another Podcast #75 – Jon Galloway on ASP.NET / MVC / Azure

    Co-hosted shows:
    Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year:
    - Yet Another Podcast #76 – Glenn Block on Node.js & Technology in China
    - Yet Another Podcast #73 – Adam Kinney on developing for Windows 8 with HTML5
    - Yet Another Podcast #64 – John Papa & Javascript
    - Yet Another Podcast #60 – Steve Sanderson and John Papa on Knockout.js
    - Yet Another Podcast #54 – Damian Edwards on ASP.NET
    - Yet Another Podcast #53 – Scott Hanselman on Blogging
    - Yet Another Podcast #52 – Peter Torr on Windows Phone Multitasking
    - Yet Another Podcast #51 – Shawn Wildermuth: //build, Xaml Programming & Beyond
    And some more are on the way that haven't been released yet. On some of these I'm pretty quiet; on others I get wacky and hassle the guests because, hey, not my podcast so not my problem.

    Show notes from the ASP.NET / MVC / Azure show:

    What was just released:
    - Visual Studio 2012 Web Developer features
    - ASP.NET 4.5 Web Forms: strongly typed data controls, data access via command methods, similar binding syntax to ASP.NET MVC
    - Some context: Damian Edwards and WebFormsMVP

    Two questions from Jesse:
    - Q: Are you making this harder or more complicated for Web Forms developers? Short answer: nothing's removed, it's just a new option. History of SqlDataSource, ObjectDataSource.
    - Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: this works really well in hybrid applications and doesn't require a rewrite; it allows sharing models, validation, and other code between Web Forms and MVC.

    ASP.NET MVC:
    - Adaptive rendering (oh, also, this is in Web Forms 4.5 as well)
    - Display modes
    - Mobile project template using jQuery Mobile
    - OAuth login to allow Twitter, Google, Facebook, etc. login
    - Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4

    Windows 8 development:
    - Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C#
    - Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development.

    Compare / contrast ASP.NET MVC and Windows 8 development:
    - Q: Do ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls.
    - Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser.
    - Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code.
    - Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development?
    - Q: If I'm a XAML programmer, what's the learning curve on getting up to speed with ASP.NET MVC? Jon describes the difference in application lifecycle and state management, and says it's nice that web development is really interactive compared to application development.
    - Q: Can you learn MVC by reading a book? Or is it a lot bigger than that?

    What is Azure, and why would I use it?
    - Jon describes the traditional Azure platform model and how Azure Web Sites fits in.
    - Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites; file hosting options.
    - Q: Is Azure just another host? How is it different from any of the other shared hosting options?
      A: Azure gives you the ability to scale up or down whenever you want, and other services are available if or when you want them.

    Read the article
