Search Results

Search found 325 results on 13 pages for 'visualize'.

Page 11 of 13

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing two web-based systems with MySQL databases, and the number of tables/views/stored routines is growing to the point where handling the complexity is becoming a real challenge. In programming languages we have namespacing, e.g. Java packages or C++ namespaces, to partition the software and group related pieces together to make things more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures live at the same level. So one has to be more creative: establishing naming conventions, perhaps using more than one database, or using tools to visualize things. What methods do you use to ease the pain, stay effective while developing your databases, and not get lost in a sea of tables, fields and stored procs? Feel free to mention tools you use as well, but try to restrict it to open source and preferably Linux solutions if that's OK. BTW, how many tables would a database need before it is considered large in terms of design?
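
    A minimal sketch of the "creative tooling" approach (in Java; the connection details and the prefix convention such as billing_* are hypothetical, not from the question): group the schema's tables by a module prefix via JDBC metadata, so the flat namespace at least reads like packages.

        import java.sql.*;
        import java.util.*;

        public class SchemaByPrefix {
            public static void main(String[] args) throws SQLException {
                // Connection details are placeholders -- adjust for your server;
                // requires the MySQL JDBC driver on the classpath.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/myshop", "user", "secret")) {
                    DatabaseMetaData meta = con.getMetaData();
                    Map<String, List<String>> modules = new TreeMap<>();
                    try (ResultSet rs = meta.getTables(null, null, "%",
                                                       new String[] {"TABLE"})) {
                        while (rs.next()) {
                            String table = rs.getString("TABLE_NAME");
                            // "billing_invoice" -> module "billing"
                            int cut = table.indexOf('_');
                            String prefix = cut > 0 ? table.substring(0, cut) : "(misc)";
                            modules.computeIfAbsent(prefix, k -> new ArrayList<>()).add(table);
                        }
                    }
                    modules.forEach((name, tables) ->
                        System.out.println(name + ": " + tables));
                }
            }
        }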

    Read the article

  • Sorting out POCO, Repository Pattern, Unit of Work, and ORM

    - by CoffeeAddict
    I'm reading a crapload on all these subjects: POCOs, the Repository pattern, Unit of Work, and using an ORM mapper. OK, I see the basic definitions of each in books, etc., but I can't visualize how it all fits together, meaning an example structure (DL, BL, PL). So what, you have your DL objects that contain your CRUD methods, then your BL objects which are "mapped" using an ORM back to your DL objects? What about DTOs... they're your DL objects, right? I'm confused. Can anyone really explain all this together, or send me example code? I'm just trying to put this together. I am deciding whether to go LINQ to SQL or EF 4 (not sure about NHibernate yet). I'm just not getting the concepts of physical layers versus code layers here, and what each type of object contains (just properties for DTOs, and CRUD for your core DL classes that match the table fields?). I just need some guidance here. I'm reading Fowler's books and starting to read Evans, but I'm just not all there yet.
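
    A minimal, ORM-agnostic sketch (in Java, with entirely hypothetical names) of one common way these pieces relate: the entity is a plain object with no persistence code (the POCO/POJO), the repository exposes collection-like CRUD over it, the unit of work tracks changes and commits them as one transaction, and an ORM such as EF or NHibernate supplies the implementations. DTOs, if you use them, are separate flat property bags shaped for the presentation layer, not the same thing as the entities.

        import java.util.List;
        import java.util.Optional;

        // POCO/POJO: plain state, no CRUD methods, no ORM dependency.
        class Customer {
            long id;
            String name;
        }

        // DTO: flat, presentation-shaped copy of the data (properties only).
        class CustomerDto {
            long id;
            String displayName;
        }

        // Repository: collection-like access to entities; the ORM implements this.
        interface CustomerRepository {
            Optional<Customer> findById(long id);
            List<Customer> findAll();
            void add(Customer c);
            void remove(Customer c);
        }

        // Unit of Work: one business transaction; commit flushes tracked changes.
        interface UnitOfWork extends AutoCloseable {
            CustomerRepository customers();
            void commit();
            void close();
        }

        // BL service: orchestrates entities via repositories, returns DTOs upward.
        class RenameCustomerService {
            private final UnitOfWork uow;
            RenameCustomerService(UnitOfWork uow) { this.uow = uow; }

            CustomerDto rename(long id, String newName) {
                Customer c = uow.customers().findById(id)
                                .orElseThrow(IllegalArgumentException::new);
                c.name = newName;          // entity mutated in memory
                uow.commit();              // ORM writes the change to the DB
                CustomerDto dto = new CustomerDto();
                dto.id = c.id;
                dto.displayName = c.name;
                return dto;
            }
        }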

    Read the article

  • Details View and integration with TinyMCE <%@ Page validateRequest="false" %>

    - by GibboK
    I use TinyMCE in a DetailsView in EDIT MODE. I would like to know if there is a solution which can prevent Request Validation from triggering an error WITHOUT USING <%@ Page validateRequest="false" %> for my page. The only way I have found so far is to encode the TextBox used by TinyMCE using the "xml" option:

        tinyMCE.init({
            encoding: "xml",

    This way Request Validation does not trigger an error, but when the data in the TextBox is read back, the result is encoded. I also tried to decode the content of the TextBox on PageLoad using this code:

        myTextBox.Text = HttpUtility.HtmlDecode(myTextBox.Text)

    But the result is not as expected, so I can only visualize encoded text. Any ideas? Thanks

    UPDATE: I found a solution to my problem. I added this code to the _DataBound event of the DetailsView:

        TextBox myContentAuthor = (TextBox)uxAuthorListDetailsView.FindControl("uxContentAuthorInput");
        myContentAuthor.Text = HttpUtility.HtmlDecode(myContentAuthor.Text);

    So on the DataBound event (it should work even on postback) the content will be decoded for the TinyMCE textbox. Here is how it works:

    01 - TinyMCE ESCAPES the data inserted in the textbox using encoding: "xml"
    02 - The data is stored ESCAPED
    03 - To read the data back into a TextBox that TinyMCE is applied to, use HttpUtility.HtmlDecode in the DataBound event of the DetailsView (so it will look decoded)
    04 - You can modify the content in the textbox in edit mode; on postback TinyMCE will encode it again using encoding: "xml", and so on

    Hope this helps someone else. But please give me your comments on this solution, thanks! Maybe you can come up with a more elegant solution! :-)
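
    For illustration only (a Java stand-in for the C#/JavaScript pair above; the entity set is a deliberately minimal assumption, real encoders cover more characters): the encode-store-decode flow works because escaping and unescaping are exact inverses, so markup stored escaped survives the round trip.

        public class XmlRoundTrip {
            static String escape(String s) {
                return s.replace("&", "&amp;")   // must come first
                        .replace("<", "&lt;")
                        .replace(">", "&gt;")
                        .replace("\"", "&quot;");
            }

            static String unescape(String s) {
                return s.replace("&quot;", "\"")
                        .replace("&gt;", ">")
                        .replace("&lt;", "<")
                        .replace("&amp;", "&");  // restore &amp; last
            }

            public static void main(String[] args) {
                String markup = "<b>5 > 4 & 3 < 4</b>";
                String stored = escape(markup);   // what gets past request validation
                System.out.println(stored);       // &lt;b&gt;5 &gt; 4 &amp; 3 &lt; 4&lt;/b&gt;
                System.out.println(unescape(stored).equals(markup)); // true
            }
        }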

    Read the article

  • Transform only one axis to log10 scale with ggplot2

    - by daroczig
    I have the following problem: I would like to visualize a discrete and a continuous variable on a boxplot, where the latter has a few extremely high values. This makes the boxplot meaningless (the points and even the "body" of the chart are too small), which is why I would like to show it on a log10 scale. I am aware that I could leave the extreme values out of the visualization, but I do not intend to. Let's see a simple example with the diamonds data:

        m <- ggplot(diamonds, aes(y = price, x = color))

    The problem is not serious here, but I hope you can imagine why I would like to see the values on a log10 scale. Let's try it:

        m + geom_boxplot() + coord_trans(y = "log10")

    As you can see, the y axis is log10 scaled and looks fine, but there is a problem with the x axis, which makes the plot very strange. The problem does not occur with scale_y_log10, but that is not an option for me, as I cannot use a custom formatter this way. E.g.:

        m + geom_boxplot() + scale_y_log10()

    My question: does anyone know a solution to plot the boxplot with a log10 scale on the y axis whose labels can be freely formatted with a formatter function, as in this thread?

    Editing the question to help answerers, based on answers and comments: what I am really after is one log10-transformed axis (y) with non-scientific labels. I would like to label it like dollars (formatter=dollar) or with any custom format. If I try @hadley's suggestion I get the following warnings:

        > m + geom_boxplot() + scale_y_log10(formatter=dollar)
        Warning messages:
        1: In max(x) : no non-missing arguments to max; returning -Inf
        2: In max(x) : no non-missing arguments to max; returning -Inf
        3: In max(x) : no non-missing arguments to max; returning -Inf

    with the y-axis labels unchanged:

    Read the article

  • For business people to manage, keep binary images in MySQL or just the URLs?

    - by Michael Mao
    Hello everyone: I am working on a task to enable image uploading and auto-scaling (from full size to thumbnail) with jQuery & PHP. I can naturally come up with two approaches: first, store the images as binary objects directly in MySQL; second, store only URLs to the images and keep the images somewhere on the server. The images are for everyone to view, so there are no security restrictions, as far as I know. Personally I don't have any preference; however, at the end of the day, it is the business people who are going to manage the images as part of the system (CRUD). So I am wondering which approach is a bit better for them. Of course I am building an easy-to-use web interface for the staff to visualize and control the process, but I am not sure if that is enough. Experience has taught me that if I don't think ahead and seek the most flexible approach, I will probably screw myself sooner or later. PS. The following link is what I've found so far, which is pretty cool, no Flash involved :) Andrew Valum's ajax image upload jQuery plugin
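
    A sketch of the second approach (in Java rather than the asker's PHP; the upload directory, URL prefix and the single-column images table are all assumptions for illustration): write the bytes to a web-served directory and record only the relative URL, so the business-facing CRUD stays simple row edits.

        import java.nio.file.*;
        import java.sql.*;
        import java.util.UUID;

        public class ImageStore {
            // Both paths are placeholders -- point them at your upload dir and vhost.
            private static final Path UPLOAD_DIR = Paths.get("/var/www/uploads");
            private static final String URL_PREFIX = "/uploads/";

            /** Saves the bytes to disk, records the URL, returns the public URL. */
            static String save(Connection con, byte[] imageBytes) throws Exception {
                String name = UUID.randomUUID() + ".jpg";
                Files.write(UPLOAD_DIR.resolve(name), imageBytes);
                String url = URL_PREFIX + name;
                try (PreparedStatement ps =
                         con.prepareStatement("INSERT INTO images (url) VALUES (?)")) {
                    ps.setString(1, url);
                    ps.executeUpdate();
                }
                return url;
            }
        }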

    Read the article

  • NetBeans Platform - how to refresh the property sheet view of a node?

    - by I82Much
    Hi all, I am using the PropertySheetView component to visualize and edit the properties of a node. This view should always reflect the most recent properties of the object; if there is a change to the object in another process, I want to somehow refresh the view and see the updated properties. The best way I have found is something like the following (making use of the EventBus library to publish and subscribe to changes in objects):

        public DomainObjectWrapperNode(DomainObject obj) {
            super(Children.LEAF, Lookups.singleton(obj));
            EventBus.subscribe(DomainObject.class, this);
        }

        public void onEvent(DomainObject event) {
            // Check whether the updated object is the one wrapped by this node;
            // if so, fire a property sets change
            firePropertySetsChange(null, this.getPropertySets());
        }

    This works, but my place in the scroll pane is lost when the sheet refreshes; the view resets to the top of the list and I have to scroll back down to where I was before the refresh. So my question is: is there a better way to refresh the property sheet view of a node, specifically one where my place in the property list is not lost upon refresh?
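
    One generic workaround for the lost scroll position, sketched in plain Swing (not NetBeans-specific; whether the PropertySheetView exposes its scroll pane this way is an assumption): remember the viewport position around the refresh and restore it once the sheet has rebuilt itself.

        import java.awt.Point;
        import javax.swing.JScrollPane;
        import javax.swing.SwingUtilities;

        class ScrollKeeper {
            static void refreshKeepingScroll(JScrollPane scrollPane, Runnable fireRefresh) {
                Point position = scrollPane.getViewport().getViewPosition();
                fireRefresh.run();  // e.g. firePropertySetsChange(null, getPropertySets())
                // Restore after the sheet has rebuilt itself, on the EDT.
                SwingUtilities.invokeLater(
                    () -> scrollPane.getViewport().setViewPosition(position));
            }
        }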

    Read the article

  • Running Different Modules on Tomcat/Server

    - by umesh awasthi
    Hi all, I have started working on my own project idea, but right at the start I am stuck on the structure decision; maybe lack of knowledge is the primary factor. I am wondering how to make two different modules co-operate on the server. Here is an explanation of what I am trying to ask. As per my design I need two modules for my application:

    1. A back end, where I can do all content handling as well as other admin tasks: a console capable of handling everything from creating and importing content to everything else.
    2. A user end, which is only an interface for the end user to use the application.

    To visualize it, it is kind of an e-commerce application: one part is back-office management and the other is the user end of the web shop. These two modules will be very much interlinked, but I don't want them to mix; I want to develop them independently, since the admin side is the core and it will also serve the user interface. My question is: how can I develop the two modules independently, but at the same time have them co-operate to accomplish the task as a whole? Really sorry in advance if I am not making any sense. Thanks in advance
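
    One common arrangement, sketched here in Java with entirely hypothetical names: put the domain logic in a shared module that both web applications depend on, and let the admin WAR and the shop WAR stay independent deployables on the same Tomcat.

        // --- shared module (plain JAR both WARs depend on) ------------------
        public interface CatalogService {
            void importProduct(String sku, String name);   // used by the admin UI
            java.util.List<String> listProductNames();     // used by the shop UI
        }

        // --- admin.war: back-office code calls the mutating operations ------
        class AdminController {
            private final CatalogService catalog;
            AdminController(CatalogService catalog) { this.catalog = catalog; }
            void onImportClicked() {
                catalog.importProduct("SKU-1", "Blue Mug");
            }
        }

        // --- shop.war: user-facing code only reads --------------------------
        class ShopController {
            private final CatalogService catalog;
            ShopController(CatalogService catalog) { this.catalog = catalog; }
            String renderProductList() {
                return String.join(", ", catalog.listProductNames());
            }
        }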

    Read the article

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have: (a) I have an Access database/application that records a significant amount of data; significant fields would be hours, # of sales, # of unreturned calls, etc. (b) I have an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the PivotTable, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself; removing all the VLOOKUPs only shaved a second or two off the load time. So the question stands: how can I get the data rapidly and reliably without so much time involved (it loads around 3,000 records into the PivotTables)?

    Read the article

  • Very simple Python functions spend a long time in the function itself and not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (to visualize the results) reveal that grad_logp spends about .00004s 'locally' per call, i.e. not in any functions it calls, and that the function n spends about .00006s locally per call. Together these two times make up about 30% of the program time that I care about. It doesn't seem to be function-call overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, yet the operations these two functions perform seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient

    Read the article

  • When to define SDD operations System->Actor?

    - by devoured elysium
    I am having some trouble understanding how to make SDDs, as I don't fully grasp why in some cases one should define operations for System -> Actor and in others not. Here is an example:

    1) The User tells the System that he wants to buy some tickets, stating his client number.
    2) The System confirms that the given client number is valid.
    3) The User tells the System the movie he wants to see.
    4) The System shows the set of available sessions and seats for that movie.
    5) The System asks the user which session/seat he wants.
    6) The User tells the System the chosen session/seat.

    This would be converted to:

    a) -----> tellClientNumber(clientNumber)
    b) <----- validClientNumber
    c) -----> tellMovieToSee(movie)
    d) <----- showsAvailableSeatsHours
    e) -----> tellSystemChosenSessionSeat(session, seat)

    I know that when we are dealing with SDDs we are still far away from coding, but I can't help trying to imagine how it would look if I converted it right away to code. I can understand 1) and 2): it's as if there were a C#/Java method with the following signature:

        boolean tellClientNumber(clientNumber)

    so I put both on the SDD. Then we have the pair 3) 4). I can imagine that as something like:

        SomeDataStructureThatHoldsAvailableSessionsSeats tellSystemMovieToSee(movie)

    Now, the problem: from what I've come to understand, my lecturer says we shouldn't make an operation on the SDD for 5), as we should only show operations from the Actor to the System, and from the System when it is either presenting data (as in c)) or validating sent data (as in b)). I find this odd: if I try to imagine this like a DOS app where you have to put in your input sequentially, it makes sense to make an arrow even for 5). Why is this wrong? How should I try to visualize this? Thanks

    Read the article

  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floating point operation) error after about 1000 iterations in the following lines:

        neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden

        weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda

    Why does this error occur? I can't find the mistake in my code :( Can you help me? Thank you very much in advance!

        unit uNeuronalesNetz;

        interface

        uses
          Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
          Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math;

        const
          NEURONS_INPUT = 43;         // number of neurons in the input layer
          NEURONS_HIDDEN = 60;        // number of neurons in the hidden layer
          NEURONS_OUTPUT = 1;         // number of neurons in the output layer
          NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network
          MAX_TIMESTEPS = 42;         // maximum number of timesteps possible (after 42 moves: board is full)
          LEARNING_RATE_INPUT = 0.25; // in ideal case: decrease gradually in course of training
          LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training
          GAMMA = 0.9;
          LAMBDA = 0.7;               // decay parameter for eligibility traces

        type
          TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector

          TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork
          private
            // GENERAL SETTINGS START
            learningMode: Boolean; // does the network learn and change its weights?
            // GENERAL SETTINGS END
            // NETWORK CONFIGURATION START
            neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // all input neurons (their values) for every timestep
            neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // all hidden neurons (their values)
            neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // output neurons (their values)
            weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // weights: input->hidden
            weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // weights: hidden->output
            // NETWORK CONFIGURATION END
            // LEARNING SETTINGS START
            outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before)
            eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // eligibility traces: hidden layer
            eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // eligibility traces: output layer
            reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep
            tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron
            t: Byte;                   // current timestep
            cyclesTrained: Integer;    // number of cycles trained so far (learning rates could be decreased accordingly)
            last50errors: Array[1..50] of Extended;
            // LEARNING SETTINGS END
          public
            constructor Create; // create the network object and do the initialization
            procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer
            procedure tdLearning; // learning algorithm: adjust the network's weights
            procedure ForwardPropagation; // propagate the input values through the network to the output layer
            function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector)
            function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1]
            procedure StartNewCycle; // start a new cycle with everything set to default except for the weights
            procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off
            procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values)
            procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without)
            procedure nextTimeStep; // increase timestep t
            function getCyclesTrained(): Integer; // get the number of cycles trained so far
            procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer
          end;

        implementation

        procedure TArtificialNeuralNetwork.UpdateEligibilityTraces;
        var
          i, j, k: Integer;
        begin
          // how worthy is a weight to be adjusted?
          for j := 1 to NEURONS_HIDDEN do begin
            for k := 1 to NEURONS_OUTPUT do begin
              eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j];
              for i := 1 to NEURONS_INPUT do begin
                eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i];
              end;
            end;
          end;
        end;

        procedure TArtificialNeuralNetwork.setReward;
        var
          i: Integer;
        begin
          for i := 1 to NEURONS_OUTPUT do begin
            // +1 = player A wins
            //  0 = draw
            // -1 = player B wins
            reward[t][i] := currentReward;
          end;
        end;

        procedure TArtificialNeuralNetwork.tdLearning;
        var
          i, j, k: Integer;
        begin
          if learningMode then begin
            for k := 1 to NEURONS_OUTPUT do begin
              if reward[t][k] = 0 then begin
                tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0
              end
              else begin
                tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received)
              end;
              for j := 1 to NEURONS_HIDDEN do begin
                weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda
                for i := 1 to NEURONS_INPUT do begin
                  weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda
                end;
              end;
            end;
          end;
        end;

        procedure TArtificialNeuralNetwork.ForwardPropagation;
        var
          i, j, k: Integer;
        begin
          for j := 1 to NEURONS_HIDDEN do begin
            neuronsHidden[j] := 0;
            for i := 1 to NEURONS_INPUT do begin
              neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden
            end;
            neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j
          end;
          for k := 1 to NEURONS_OUTPUT do begin
            neuronsOutput[k] := 0;
            for j := 1 to NEURONS_HIDDEN do begin
              neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output
            end;
            neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k
          end;
        end;

        procedure TArtificialNeuralNetwork.setLearningMode;
        begin
          learningMode := activated;
        end;

        constructor TArtificialNeuralNetwork.Create;
        var
          i, j, k: Integer;
        begin
          inherited Create;
          Randomize; // initialize random numbers generator
          learningMode := TRUE;
          cyclesTrained := -2; // only set to -2 because it will be increased twice in the beginning
          StartNewCycle;
          for j := 1 to NEURONS_HIDDEN do begin
            for k := 1 to NEURONS_OUTPUT do begin
              weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5
            end;
            for i := 1 to NEURONS_INPUT do begin
              weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5
            end;
          end;
          for i := 1 to 50 do begin
            last50errors[i] := 0;
          end;
        end;

        procedure TArtificialNeuralNetwork.nextTimeStep;
        begin
          t := t+1;
        end;

        procedure TArtificialNeuralNetwork.StartNewCycle;
        var
          i, j, k, m: Integer;
        begin
          t := 1; // start in timestep 1
          cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far
          for j := 1 to NEURONS_HIDDEN do begin
            neuronsHidden[j] := 0;
            for k := 1 to NEURONS_OUTPUT do begin
              eligibilityTraceOutput[j][k] := 0;
              outputBefore[k] := 0;
              neuronsOutput[k] := 0;
              for m := 1 to MAX_TIMESTEPS do begin
                reward[m][k] := 0;
              end;
            end;
            for i := 1 to NEURONS_INPUT do begin
              for k := 1 to NEURONS_OUTPUT do begin
                eligibilityTraceHidden[i][j][k] := 0;
              end;
            end;
          end;
        end;

        function TArtificialNeuralNetwork.getCyclesTrained;
        begin
          result := cyclesTrained;
        end;

        procedure TArtificialNeuralNetwork.setInputs;
        var
          k: Integer;
        begin
          for k := 1 to NEURONS_INPUT do begin
            neuronsInput[t][k] := state[k];
          end;
        end;

        function TArtificialNeuralNetwork.getRating;
        begin
          setInputs(state);
          ForwardPropagation;
          result := neuronsOutput[1];
          if not explorative then begin
            tdLearning;                      // adjust the weights according to TD-lambda
            ForwardPropagation;              // calculate the network's output again
            outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep
            UpdateEligibilityTraces;         // update the eligibility traces for the next timestep
            nextTimeStep;                    // go to the next timestep
          end;
        end;

        function TArtificialNeuralNetwork.HyperbolicTangent;
        begin
          if x > 5500 then // prevent overflow
            result := 1
          else
            result := (Exp(2*x)-1)/(Exp(2*x)+1);
        end;

        end.
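
    Not the asker's code, but a note on one likely failure mode: when weights grow without bound over many iterations, Exp(2*x) eventually stops producing a finite intermediate result, and the code above only guards the positive side. A saturating tanh avoids evaluating Exp for large |x| entirely; a sketch in Java (the cutoff of 20 is an arbitrary illustrative choice, and Java's own Math.tanh is shown for comparison):

        public class SafeTanh {
            // Saturates well before exp can overflow; tanh(±20) is already ±1
            // to double precision.
            static double tanh(double x) {
                if (x > 20)  return 1.0;
                if (x < -20) return -1.0;
                double e = Math.expm1(2 * x);   // e^(2x) - 1, accurate near 0
                return e / (e + 2);             // (e^(2x) - 1) / (e^(2x) + 1)
            }

            public static void main(String[] args) {
                System.out.println(tanh(0.5));        // ~0.4621
                System.out.println(Math.tanh(0.5));   // library reference
                System.out.println(tanh(1e6));        // 1.0, no overflow
            }
        }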

    Read the article

  • What library to choose to build a user interface for C++ software that uses SDL

    - by Barth
    Dear all, I have a simulation software (C++) that runs on the command line. It is platform independent (currently compiling and running on Windows, Mac OS X and Linux). When the simulation ends, I visualize the result with SDL; it is a very basic 2D view, mainly colored squares next to each other. I would like to have a user interface on top of the simulation so that I can start and pause the simulation and change the parameters on the fly - something pretty simple, I guess. Well, ideally I would also add a grapher somewhere to see the evolution of some parameters over time. Now, I am wondering what direction I should go. Should I try to use one of the UI libraries for SDL? Or maybe wxWidgets in conjunction with SDL? Or simply wxWidgets and get rid of SDL? Do you have any experience with this? Thanks in advance Barth PS: I tried to use AGAR, an SDL UI library. It seemed very promising, but I couldn't get it working - not even the hello world.

    Read the article

  • Coordinate system problem with the grid control

    - by Jason94
    In my WPF application I'm trying to visualize some temperature data. I have a list of temperatures for the past 7 days and want to make a point-to-point line diagram. My problem is with the different coordinate systems and adjusting the data to the grid. XAML:

        <Grid Height="167" HorizontalAlignment="Left" Margin="6,6,0,0" Name="grid1" VerticalAlignment="Bottom" Width="455" />

    C# (draft): http://pastebin.com/6UWkMFj1

    scale is a global variable that changes with a slider (1-10). How do I correct my application so the line is always centered? As it is now, it starts out centered, but if I crank the slider up to 3-4 the line goes up and above the application window. I would also like to use the full height of the grid window, not just a small piece, as in the images below:

    http://img32.imageshack.us/i/002wtvu.jpg/
    http://img691.imageshack.us/i/001tqco.jpg/

    As you can see, I have worked out my data so that day 1 with a temperature of 62 F is lower than day 2 with a temperature of 76 F, but I have scaling and placement issues... could somebody straighten out my math? :-)
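
    The usual fix, sketched here in Java since the asker's C# draft is offsite (the names are made up): normalize each value into [0, 1] against the data's min/max, then scale by the plot height and flip, because screen y grows downward. Anchoring to min/max instead of multiplying by a raw slider factor keeps the line inside the grid at any zoom.

        public class TempScale {
            /** Maps a temperature to a pixel y inside a plot of the given height. */
            static double yFor(double value, double min, double max, double plotHeight) {
                double normalized = (value - min) / (max - min);   // 0..1
                return plotHeight - normalized * plotHeight;       // flip: y grows down
            }

            public static void main(String[] args) {
                double[] temps = {62, 76, 71, 68, 74, 79, 73};
                double min = 62, max = 79, height = 167;           // grid height from the XAML
                for (int day = 0; day < temps.length; day++)
                    System.out.printf("day %d -> y = %.1f%n",
                                      day + 1, yFor(temps[day], min, max, height));
                // day 1 (62 F) maps to y = 167 (bottom), day 6 (79 F) to y = 0 (top)
            }
        }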

    Read the article

  • JSF2: Re-render all components on page that have a given ID, without absolute paths

    - by tlind
    Is there any way in JSF 2.0/PrimeFaces of re-rendering all components (using the PrimeFaces update="id1 id2..." attribute or the <f:ajax render="..."/> tag) that have a given ID, regardless of whether they are in the same form as the button triggering the AJAX re-render? For example, I want my button to re-render all sections on a page that visualize the user's current shopping basket. Right now, I always have to specify the absolute path to the components I want updated, e.g. update=":header:basket :left-sidebar:menu:basket", which is rather impractical if the structure of the page changes (besides, I have not been able to figure out the correct path for one of these components). I already tried to implement a custom EL function like this, which traverses the component tree: update="{utilBean.findAllComponentsMatchingId('basket')}" but at the time that function is evaluated, apparently not the entire component tree has been set up, as it doesn't contain the components I am looking for. How can I deal with this? There certainly must be an easy way of doing AJAX-based updates of sections of the page that are not part of the current <h:form>? Thanks!
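
    One workaround, sketched below against the standard JSF 2.0 visit API (the method name renderAllWithId is made up): walk the fully built tree and register every matching clientId for the partial update. The timing is the catch the asker ran into; this has to run once the view is complete, e.g. from a preRenderView event or an AJAX behavior listener, not while the tree is still being built.

        import javax.faces.component.UIComponent;
        import javax.faces.component.visit.VisitCallback;
        import javax.faces.component.visit.VisitContext;
        import javax.faces.component.visit.VisitResult;
        import javax.faces.context.FacesContext;

        public class PartialUpdateHelper {
            /** Marks every component whose local ID matches for re-rendering. */
            public static void renderAllWithId(final String id) {
                final FacesContext context = FacesContext.getCurrentInstance();
                context.getViewRoot().visitTree(
                    VisitContext.createVisitContext(context),
                    new VisitCallback() {
                        public VisitResult visit(VisitContext ctx, UIComponent target) {
                            if (id.equals(target.getId())) {
                                context.getPartialViewContext().getRenderIds()
                                       .add(target.getClientId(context));
                            }
                            return VisitResult.ACCEPT; // keep visiting the whole tree
                        }
                    });
            }
        }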

    Read the article

  • When to define SDD (System Sequence Diagram) operations System->Actor?

    - by devoured elysium
    I am having some trouble understanding how to make System Sequence Diagrams, as I don't fully grasp why in some cases one should define operations for System -> Actor and in others not. Here is an example. Let's assume the System is a cinema ticket store and the Actor is a client who wants to buy a ticket.

    1) The User tells the System that he wants to buy some tickets, stating his client number.
    2) The System confirms that the given client number is valid.
    3) The User tells the System the movie he wants to see.
    4) The System shows the set of available sessions and seats for that movie.
    5) The System asks the user which session/seat he wants.
    6) The User tells the System the chosen session/seat.

    This would be converted to:

    a) -----> tellClientNumber(clientNumber)
    b) <----- validClientNumber
    c) -----> tellMovieToSee(movie)
    d) <----- showsAvailableSeatsHours
    e) -----> tellSystemChosenSessionSeat(session, seat)

    I know that when we are dealing with SDDs we are still far away from coding, but I can't help trying to imagine how it would look if I converted it right away to code. I can understand 1) and 2): it's as if there were a C#/Java method with the following signature:

        boolean tellClientNumber(clientNumber)

    so I put both on the SDD. Then we have the pair 3) 4). I can imagine that as something like:

        SomeDataStructureThatHoldsAvailableSessionsSeats tellSystemMovieToSee(movie)

    Now, the problem: from what I've come to understand, my lecturer says we shouldn't make an operation on the SDD for 5), as we should only show operations from the Actor to the System, and from the System when it is either presenting data (as in c)) or validating sent data (as in b)). I find this odd: if I try to imagine this like a DOS app where you have to put in your input sequentially, it makes sense to make an arrow even for 5). Why is this wrong? How should I try to visualize this? Thanks
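
    One way to see the lecturer's rule, sketched as a hypothetical Java interface (the names and return types are illustrative simplifications): the System -> Actor messages (b, d) are just return values, and step 5 ("the System asks...") is not an operation at all; it is merely the UI presenting the result of step 4 and waiting for the next Actor -> System call, so the system never invokes the actor.

        import java.util.List;

        interface TicketSystem {
            boolean tellClientNumber(String clientNumber);               // steps a, b
            List<String> tellMovieToSee(String movie);                   // steps c, d
            String tellChosenSessionSeat(String session, String seat);   // step e
        }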

    Read the article

  • Detail text in TableView from Core Data (iPhone)

    - by user554867
    Hey, this is my first question here at Stack Overflow, so let me try to be as specific as I can. The project is as follows: I am trying to parse an XML file in an iPhone app, which works already. I am saving the parsed data into Core Data on the iPhone. So far so good. I have two elements of the XML file which I want to show in the table view as the text and the detail text. Now the strange behaviour occurs: if I take the data from Core Data and try to visualize it in the table view, then both elements are displayed in separate cells, not as the detail text of the element. If I just use a constant string for the detail text, then it works; but if I try to take a specific element from Core Data for each cell, it is shown separately in the table view. I googled a lot, reading here and there. I don't know which code I should show, because I don't have a clue where exactly the mistake could be. Maybe someone can answer that immediately, because it is a stupid mistake somewhere, which is common. Thanks for your help.

    Read the article

  • How do I call C++/CLI (.NET) DLLs from standard, unmanaged non-.NET applications?

    - by tronjohnson
    In the unmanaged world, I was able to write a __declspec(dllexport) or, alternatively, use a .DEF file to expose a function so that callers could call into the DLL. (Because of C++ name mangling for __stdcall, I put aliases into the .DEF file so certain applications could reuse certain exported DLL functions.) Now, I am interested in exposing a single entry-point function from a .NET assembly, in unmanaged fashion, but having it enter into .NET-style functions within the DLL. Is this possible, in a simple and straightforward fashion? What I have is a third-party program that I have extended through DLLs (plugins) that implement some complex mathematics. However, the third-party program has no means for me to visualize the calculations. I want to somehow take these pre-written math functions, compile them into a separate DLL (but using C++/CLI in .NET), and then add hooks to the functions so I can render what's going on under the hood in a .NET user control. I'm not sure how to blend the .NET stuff with the unmanaged stuff, or what to Google to accomplish this task. Specific suggestions with regard to the managed/unmanaged bridge, or alternative methods to accomplish the rendering in the manner I have described, would be helpful. Thank you.

    Read the article

  • March 21st Links: ASP.NET, ASP.NET MVC, AJAX, Visual Studio, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET

    - URL Routing in ASP.NET 4: Scott Mitchell has a nice article that talks about the new URL routing features coming to Web Forms applications with ASP.NET 4. Also check out my previous blog post on this topic.
    - Control of Web Control ClientID Values in ASP.NET 4: Scott Mitchell has a nice article that describes how it is now easy to control the client "id" value emitted by server controls with ASP.NET 4.
    - Web Deployment Made Awesome: Very nice MIX10 talk by Scott Hanselman on the new web deployment features coming with VS 2010, MSDeploy, and .NET 4. Makes deploying web applications much, much easier.
    - ASP.NET 4's Browser Capabilities Support: Nice blog post by Stephen Walther that talks about the new browser definition capabilities support coming with ASP.NET 4.
    - Integrating Twitter into an ASP.NET Website: Nice article by Scott Mitchell that demonstrates how to call and integrate Twitter from within your ASP.NET applications.
    - Improving CSS with .LESS: Nice article by Scott Mitchell that describes how to optimize CSS using .LESS - a free, open source library.

    ASP.NET MVC

    - Upgrading ASP.NET MVC 1 applications to ASP.NET MVC 2: Eilon Lipton from the ASP.NET team has a nice post that describes how to easily upgrade your ASP.NET MVC 1 applications to ASP.NET MVC 2. He has an automated tool that makes this easy. Note that automated MVC upgrade support is also built into VS 2010. Use the tool in this blog post for updating existing MVC projects using VS 2008.
    - Advanced ASP.NET MVC 2: Nice video talk by Brad Wilson of the ASP.NET MVC team. In it he describes some of the more advanced features in ASP.NET MVC 2 and how to maximize your productivity with them.
    - Dynamic Select Lists with ASP.NET MVC and jQuery: Michael Ceranski has a nice blog post that describes how to dynamically populate dropdownlists on the client using AJAX.

    AJAX

    - Microsoft AJAX Minifier: We recently shipped an updated minifier utility that allows you to shrink/minify both JavaScript and CSS files - which can improve the performance of your web applications. You can run this either manually as a command-line tool or now automatically integrate it using a Visual Studio build task. You can download it for free here.

    Visual Studio

    - VS 2010 Tip: Quickly Closing Documents: Nice blog post that describes some techniques for optimizing how windows are closed with the new VS 2010 IDE.
    - Collapse to Definitions with Outlining: Nice tip from Zain on how to collapse your code editor to outline mode using Ctrl + M, Ctrl + O. Also check out his post on copy/paste with outlining here.
    - $299 VS 2010 Upgrade Offer for VS 2005/2008 Standard Users: Soma blogs about a nice VS 2010 upgrade offer you can take advantage of if you have VS 2005 or VS 2008 Standard editions. For $299 you can upgrade to VS 2010 Professional edition.
    - Dependency Graphs: Jason Zander (who runs the VS team) has a nice blog post that covers the new dependency graph support within VS 2010. This makes it easier to visualize the dependencies within your application. Also check out this video here.
    - Layer Validation: Jason Zander has a nice blog post that talks about the new layer validation features in VS 2010. This enables you to enforce cleaner layering within your projects and solutions.
    - VS 2010 Profiler Blog: The VS 2010 Profiler team has their own blog, and on it you can find a bunch of nice posts from the last few months that talk about a lot of the new features coming with VS 2010's profiler support. Some really nice features coming.

    Silverlight

    - Silverlight 4 Training Course: Nice free set of training courses from Microsoft that can help bring you up to speed on all of the new Silverlight 4 features and how to build applications with them. Updated and current with the recently released Silverlight 4 RC build and tools.
    - Getting Started with Silverlight and Windows Phone 7 Development: Nice blog post by Tim Heuer that summarizes how to get started building Windows Phone 7 applications using Silverlight. Also check out my blog post from last week on how to build a Windows Phone 7 Twitter application using Silverlight.
    - A Guide to What Has Changed with the Silverlight 4 RC: Nice summary post by Tim Heuer that describes all of the things that have changed between the Silverlight 4 Beta and the Silverlight 4 RC.
    - Path Based Layout - Part 1 and Part 2: Christian Schormann has a nice blog post about a really cool new feature in Expression Blend 4 and Silverlight 4 called Path Layout. Also check out Andy Beaulieu's blog post on this.

    Hope this helps,

    Scott

    Read the article

  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF-related sessions at a glance. Note the meet-and-greet session added for Wednesday, October 3rd, from 4:30 pm to 5:30 pm.

    Oracle ADF and Fusion Development

    General Sessions - Mon 1 Oct, 2012
    - 10:45 AM - 11:45 AM | General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud | Marriott Marquis - Salon 8
    - 12:15 PM - 1:15 PM | General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology | Moscone West - 3014
    - 1:45 PM - 2:45 PM | General Session: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies | Moscone West - 3002/3004
    - 4:45 PM - 5:45 PM | General Session: Building Mobile Applications with Oracle Cloud | Moscone West - 2002/2004

    Conference Sessions

    Mon 1 Oct, 2012
    - 12:15 PM - 1:15 PM | Understanding Oracle ADF and Its Role in Oracle Fusion | Moscone South - 306
    - 1:45 PM - 2:45 PM | Building Performant Oracle ADF Business Components to Meet Tomorrow's Needs | Marriott Marquis - Golden Gate C3
    - 3:15 PM - 4:15 PM | End-to-End Oracle ADF Development in Eclipse | Marriott Marquis - Golden Gate C3
    - 4:45 PM - 5:45 PM | Classic Mistakes with Oracle Application Development Framework | Marriott Marquis - Salon 7

    Tues 2 Oct, 2012
    - 10:15 AM - 11:15 AM | One Size Doesn't Fit All: Oracle ADF Architecture Fundamentals | Marriott Marquis - Golden Gate C2
    - 10:15 AM - 11:15 AM | Oracle Business Process Management/Oracle ADF Integration Best Practices | Marriott Marquis - Golden Gate C3
    - 11:45 AM - 12:45 PM | Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF | Moscone South - 306
    - 11:45 AM - 12:45 PM | Secrets of Successful Projects with Oracle Application Development Framework | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications | Moscone West - 3003
    - 1:15 PM - 2:15 PM | The Future of Forms Is … Oracle Forms (and Friends) | Moscone South - 306
    - 5:00 PM - 6:00 PM | Best Practices for Integrating SOAP and REST Service into Oracle ADF | Marriott Marquis - Golden Gate C2

    Wed 3 Oct, 2012
    - 10:15 AM - 11:15 AM | Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite | Moscone West - 3001
    - 10:15 AM - 11:15 AM | Visualize This! Best Practices for Data Visualization in Desktop and Mobile Apps | Marriott Marquis - Golden Gate C3
    - 10:15 AM - 11:15 AM | Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips | Marriott Marquis - Golden Gate C2
    - 11:45 AM - 12:45 PM | How to Migrate an Oracle Forms Application to Oracle ADF | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | Oracle ADF: Lessons Learned in Real-World Implementations | Moscone South - 309
    - 3:30 PM - 4:30 PM | Oracle ADF Implementations Around the Globe: Best Practices | Marriott Marquis - Golden Gate C2
    - 3:30 PM - 4:30 PM | Oracle Developer Cloud Services | Marriott Marquis - Salon 7
    - 4:30 PM - 5:30 PM | Oracle JDeveloper and Oracle ADF: What's New | Hilton San Francisco - Continental Ballroom 5
    - 5:00 PM - 6:00 PM | Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight | Moscone West - 2020
    - 5:00 PM - 6:00 PM | Extending Social into Enterprise Applications and Business Processes | Marriott Marquis - Golden Gate C3
    - 5:00 PM - 6:00 PM | The Tie That Binds: An Introduction to Oracle ADF Bindings | Marriott Marquis - Golden Gate C2

    Thur 4 Oct, 2012
    - 11:15 AM - 12:15 PM | Using Oracle ADF with Oracle E-Business Suite: The Full Integration View | Moscone West - 3003
    - 11:15 AM - 12:15 PM | Deep Dive into Oracle ADF: Advanced Techniques | Marriott Marquis - Golden Gate C2
    - 12:45 PM - 1:45 PM | Monitor, Analyze, and Troubleshoot Your Oracle ADF Application | Marriott Marquis - Golden Gate C2
    - 2:15 PM - 3:15 PM | Oracle WebCenter Portal: Creating and Using Content Presenter Templates | Marriott Marquis - Golden Gate C2

    Hands-on Labs (HOL)

    Mon 1 Oct, 2012
    - 10:45 AM - 11:45 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 1:45 PM - 2:45 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    - 3:15 PM - 4:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 3:15 PM - 4:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 4:45 PM - 5:45 PM | Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab | Marriott Marquis - Salon 3/4

    Tues 2 Oct, 2012
    - 10:15 AM - 11:15 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Wed 3 Oct, 2012
    - 10:15 AM - 11:15 AM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 11:45 AM - 12:45 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 1:15 PM - 2:15 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    - 3:30 PM - 4:30 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Thur 4 Oct, 2012
    - 11:15 AM - 12:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 11:15 AM - 12:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 12:45 PM - 1:45 PM | Oracle ADF for Java EE Developers with Oracle Enterprise Pack for Eclipse | Marriott Marquis - Salon 3/4

    Birds-of-a-Feather (BOF) - Mon 1 Oct, 2012
    - 6:15 PM - 7:00 PM | How to Get Started with Oracle ADF | Marriott Marquis - Club Room
    - 7:15 PM - 8:00 PM | Building Next-Generation Applications with Oracle ADF and Oracle BPM | Marriott Marquis - Golden Gate C3
    - 7:15 PM - 8:00 PM | The Future of Oracle Forms: Upgrade, Modernize, or Migrate? | Marriott Marquis - Golden Gate C2
    - 7:15 PM - 8:00 PM | Oracle ADF Faces: One Site for Many Devices | Marriott Marquis - Golden Gate C1

    User Group Forum (Sunday Only) - Sun 30 Sept, 2012
    - 9:00 AM - 10:00 AM | Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World | Moscone South - 305
    - 10:15 AM - 11:15 AM | Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications | Moscone South - 305
    - 11:30 AM - 12:30 PM | ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications | Moscone South - 305
    - 12:45 PM - 3:45 PM | ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo | Moscone South - 305
    - 3:15 PM - 4:15 PM | Mobile Development with Oracle JDeveloper and Oracle ADF | Moscone West - 2010

    Demos
    - Developer | Moscone North, Upper Lobby - N-002
    - Oracle ADF Mobile Development | Moscone North, Upper Lobby - N-001
    - Oracle Eclipse Projects | Hilton San Francisco, Grand Ballroom - HHJ-008
    - Oracle Enterprise Pack for Eclipse | Moscone South, Right - S-208
    - Oracle JDeveloper and Oracle ADF | Moscone South, Right - S-207

    Exhibits
    - Accenture | Moscone South - 1813; Moscone South - 2221
    - Infosys | Moscone South - 1701; Moscone South - SMR-005
    - Innowave Technology | Moscone South - 2309
    - ODTUG | Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion

    Oracle ADF Developers Meet Up - Wednesday, Oct 3
    - 4:30 PM - 5:30 PM | Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say 'hi!' and enjoy free beer. | OTN Lounge

    Read the article

  • ASP.NET MVC: Converting business objects to select list items

    - by DigiMortal
    Some of our business classes are used to fill dropdown boxes or select lists. And often you have some base class for all your business classes. In this posting I will show you how to use a base business class to write an extension method that converts a collection of business objects to ASP.NET MVC select list items without writing a lot of code.

    BusinessBase, BaseEntity and other base classes

    I prefer to have some base class for all my business classes so I can easily use them, regardless of their type, in the contexts I need.

    NB! Some guys say that it is a good idea to have a base class for all your business classes, and they also suggest you have the mappings done the same way in the database. Other guys say that it is good to have a base class, but you don't have to have one master table in the database that contains the identities of all your business objects. It is up to you how and what you prefer to do, but whatever you do - think and analyze first, please. :)

    To keep things maximally simple I will use a very primitive base class in this example. This class has only an Id property, and that's it.

        public class BaseEntity
        {
            public virtual long Id { get; set; }
        }

    Now we have Id in the base class, and we have one more question to solve: how to better visualize our business objects? To users, an ID is not enough; they want something more informative. We can define some abstract property that all classes must implement. But there is also another option we can use - overriding the ToString() method in our business classes.

        public class Product : BaseEntity
        {
            public virtual string SKU { get; set; }
            public virtual string Name { get; set; }

            public override string ToString()
            {
                if (string.IsNullOrEmpty(Name))
                    return base.ToString();

                return Name;
            }
        }

    Although you can add more functionality and properties to your base class, we are at the point where we have what we needed: an identity and a human-readable presentation of business objects.

    Writing the list item converter

    Now we can write the method that creates list items for us.

        public static class BaseEntityExtensions
        {
            public static IEnumerable<SelectListItem> ToSelectListItems<T>
                (this IList<T> baseEntities) where T : BaseEntity
            {
                return ToSelectListItems((IEnumerator<BaseEntity>)
                    baseEntities.GetEnumerator());
            }

            public static IEnumerable<SelectListItem> ToSelectListItems
                (this IEnumerator<BaseEntity> baseEntities)
            {
                var items = new HashSet<SelectListItem>();

                while (baseEntities.MoveNext())
                {
                    var item = new SelectListItem();
                    var entity = baseEntities.Current;

                    item.Value = entity.Id.ToString();
                    item.Text = entity.ToString();

                    items.Add(item);
                }

                return items;
            }
        }

    You can see here two overloads of the same method. One works with IList<T> and the other with IEnumerator<BaseEntity>. Although my repositories mostly return IList<T> when querying data, there are always situations where I can use more abstract types and interfaces.

    Using the extension methods in code

    In your code you can use the ToSelectListItems() extension methods as shown in the following code fragment.

        ...
        var model = new MyFormModel();
        model.Statuses = _myRepository.ListStatuses().ToSelectListItems();
        ...

    You can call this method on all your business classes that extend your base entity. Wanna have some fun with this code? Write an overload for the extension method that accepts a selected item ID.

    Read the article

  • Create a Social Community of Trust Along With Your Federal Digital Services Governance

    - by TedMcLaughlan
    The Digital Services Governance Recommendations were recently released, supporting the US Federal Government's Digital Government Strategy Milestone Action #4.2 to establish agency-wide governance structures for developing and delivering digital services.

    Figure 1 - From: "Digital Services Governance Recommendations"

    While extremely important from a policy and procedure perspective within an Agency's information management and communications enterprise, these recommendations only very lightly reference perhaps the most important success enabler - the "Trusted Community" required for ultimate usefulness of the services delivered. By "ultimate usefulness", I mean the collection of public, transparent properties around government information and digital services that include social trust and validation, social reach, expert respect, and comparative, standard measures of relative value. In other words, do the digital services meet the expectations of the public, social media ecosystem (people AND machines)?

    A rigid governance framework, controlling by rules, policies and roles the creation and dissemination of digital services, may meet the expectations of direct end users and most stakeholders - including the agency information stewards and security officers. All others who may share comments about the services, write about them, swap or review extracts, repackage, visualize or otherwise repurpose the output for use in entirely unanticipated, social ways - these "stakeholders" will not be governed, but may observe guidance generated by a "Trusted Community". As recognized members of the trusted community, these stakeholders may ultimately define the right scope and detail of governance that all other users might observe, promoting and refining the usefulness of the government product as the social ecosystem expects.

    So, as part of an agency-centric governance framework, it's advised that a flexible governance model be created for stewarding a "Community of Trust" around the digital services. The first steps follow the approach outlined in the Recommendations:

    Step 1: Gather a Core Team

    In addition to the roles and responsibilities described, perhaps a set of characteristics and responsibilities can be developed for the "Trusted Community Steward/Advocate" - i.e. a person or team who (a) is entirely cognizant of and respected within the external social media communities, and (b) is trusted both within the agency and outside as a practical, responsible, non-partisan communicator of useful information. This may seem like a standard Agency PR/Outreach team role - but often an agency or stakeholder subject matter expert with a public, active social persona works even better.

    Step 2: Assess What You Have

    In addition to existing agency or stakeholder decision-making bodies and assets, it's important to take a PR/Marketing view of the social ecosystem. How visible are the services across the social channels utilized by current or desired constituents of your agency? What's the online reputation of your agency and perhaps the service(s)? Is Search Engine Optimization (SEO) a facet of external communications/publishing lifecycles? Who are the public champions, instigators, value-adders for the digital services, or perhaps just influential "communicators" (i.e. with no stake in the game)? You're essentially assessing your market and social presence, and identifying the actors (including your own agency employees) in the existing community of trust.

    Step 3: Determine What You Want

    The evolving Community of Trust will most readily absorb, support and provide feedback regarding "Core Principles" (Element B of the "six essential elements of a digital services governance structure") shared by your Agency, and obviously play a large, though probably very unstructured, part in Element D, "Stakeholder Input and Participation". Plan for this, and seek input from the social media community with respect to performance metrics - these should be geared around the outcome and growth of the trusted community's actions. How big and active is this community? What's the influential reach of this community with respect to particular messaging or campaigns generated by the Agency? What's the referral rate TO your digital services, FROM channels owned or operated by members of this community? (This requires governance with respect to content generation, inclusive of "markers" or "tags".)

    At this point, while your Agency proceeds with steps 4 ("Build/Validate the Governance Structure") and 5 ("Share, Review, Upgrade"), the Community of Trust might as well just get going, and start adding value and usefulness to the existing conversations and existing data services - loosely though directionally stewarded by your trusted advocate(s).

    Why is this an "Enterprise Architecture" topic? Because it's increasingly apparent that a Public Service "Enterprise" is not wholly contained within Agency facilities, firewalls and job titles - it's also manifested in actual, perceived or representative forms outside the walls, on the social Internet. An Agency's EA model and resulting investments both facilitate and are impacted by the "Social Enterprise". At Oracle, we're very active both within our Enterprise and outside, helping foster social architectures that enable truly useful public services, digital or otherwise.

    Read the article

  • Branching and Merging Improvements in TFS2010

    - by jehan
    Introducing the concept of “first class branches” is a significant improvement as part of the 2010 release with respect to version control.  Not only does it help to distinguish between folders and branches, but it enables branch visualizations. Let us see improvements in detail. ·         In TFS2008, you don’t know which of the folders are Branches: All folders looks the same, all have the folder icon. Now, In TFS 2010 there is a new icon that shows which of the folder is a Branch.       ·      There is no visual means to manage branches in TFS2008:   You dont have any means to identify which branches are related and the relation type. Now, In TFS 2010 you have visual tools to see the Branches Hierarchy. In order to see a Branch Hierarchy just Right Click the Branch and choose: Branching and Merging –> View Hierarchy     ·         In TFS2008, there is no option to track changes path between the Branches:  If you have made a merge in a Branch you can’t track from which Branch this Merge came from. Now, you have the tools that shows the path of change between the Branches, you can also see where change was added on a timeline.  In order to track a change do the following: Step1: Right click the Branch and click View History   Step 2: Choose a changeset to track and click the “Track Changeset” button.     Step 3: Choose the branches that will be in the view and click “Visualize”. In above visual, you can see that Changesets 108,109,110 and 119 where merged from Main to Release1.0 Branch and then “Release_1.0” Branched to “Dev1.0. Step4: You can also see the Merges on a Timeline by clicking on the “Timeline Tracking” button.   Creating New Branches: In TFS 2010, the creation of branches has been streamlined a bit from the process in 2008.  In 2008, creating a new branch was like every other action in the system – changes were pended on the client, and then checked in to the server. Because of this creating new branch in TFS2008 was time-consuming process.  In TFS2010, the step where changes are pended has been bypassed and now performing the branch creation is entirely on the server.  With this approach, the round trip time for downloading a copy of each file on the branch and then uploading each file again has been eliminated.  Note: In TFS2010, the new branch will be created and committed as a single operation on the server. Pending changes will not be created, it doesn’t require a check-in as it will be carried out as a single operation and it’s not possible to cancel.     Manage Branch Permissions: The properties view for branches is also different than that of ordinary folders or file, containing some metadata for the branch, relationship information, and permissions for the branch. In TFS2008, the users who have checkout and Check-in permissions can create a branch. But, In TFS2010 you can control the permissions for Branches using Manage Branch permissions.   Reparent option in TFS2010: In TFS2008, if we have two branches which don’t have parent-child relation and we want perform merge between these two branches then we have to perform baseless merge using tf.exe command line. I have two branches Release_1.0 and Dev1.0_F2 which don’t have any relation between them, that’s why when I click on merge option in Release_1.0, in Target Branch it’s not showing Dev1.0_F2 branch to perform the merges.     Let us see what can we do for this in TFS2010, first perform a TFS baseless merge to establish a relationship between the parent branch and the child branches.  
A baseless merge will only merge the folder, not its contents. TFS baseless merges are performed via the command line using the VS2010 command prompt:

tf merge /baseless <ParentBranch> <ChildBranch>

Check in your pending changes. This creates the link between the branches, but the relationship is still not complete. Now, select the child branch in Source Control Explorer and, from the File menu, choose Source Control –> Branching and Merging –> Reparent. In the dialog box, choose the appropriate branch as the new parent. Click Reparent, then go to the parent branch and click Merge. You will now see that the Dev1.0_F2 branch has been added to the Target Branch option. (A command-line sketch of this sequence follows at the end of this post.)

Converting Folders to Branches and Branches to Folders: You can convert any folder to a branch from the context menu by right-clicking the folder –> Branching and Merging –> Convert to Branch. In a similar way, you can convert a branch back to a folder using the Convert to Folder option available in the File menu (File –> Source Control –> Branching and Merging –> Convert to Folder). This option is not available in the context menu.
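A minimal command-line sketch of the baseless-merge step described above, run from the VS2010 command prompt, might look as follows. The server paths ($/MyProject/Release_1.0 and $/MyProject/Dev1.0_F2) and the check-in comment are placeholder values for illustration, not paths from an actual project:

rem Baseless merge to establish a relationship between the two branches
rem (as noted above, this merges the folder itself, not its contents)
tf merge $/MyProject/Release_1.0 $/MyProject/Dev1.0_F2 /baseless

rem Check in the pending merge so the relationship is recorded on the server
tf checkin /comment:"Baseless merge to relate Release_1.0 and Dev1.0_F2"

After the check-in, the Reparent step itself is performed in the Source Control Explorer UI as described above.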

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white, yes/no thing. Teams can be agile to varying degrees; I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time.

We’re probably all familiar with the concept of “Done Done”, which represents what *actually* being done with a feature means. Not just a developer saying he’s done right after writing the last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, and so on. To internalize the concept of “Done Done”, teams usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

The Done Done technique is typically applied only to features (aka User Stories). What I do is extend this to apply to several concepts: User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

Code Reviews
StyleCop
FxCop
User Manuals Updated
Architecture Docs Updated
Tested by QA
Tested by UAT
Installers Created
Support Knowledge Base Updated
Deployment Instructions (for Ops) Written
Automated Unit Tests Run
Automated Integration Tests Run

Then we arrange these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story).

So what we need to do as a team is work to move these activities from their current home (the Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down immediately; as a team we figure out what we think we’re capable of moving down to the Sprint cycle and Story cycle right away, and that becomes our starting set of DoDs. Over time the team makes an effort to keep moving activities down from Release -> Sprint -> Story as they become more agile and more mature.

I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, and doing UAT testing (typical Release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap for maturing their agile practices over time.
Sure, there are other aspects to maturity outside of this, but it’s a great technique, and an easy one to visualize, for driving agility into the team. Just keep moving those activities (aka “gates”) down the board from Release -> Sprint -> Story. (A small illustrative code sketch of these tiers appears at the end of this post.)

I’ll try to give an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

Release
- Create/update deployment instructions for Ops
- Instructional videos updated
- Run manual regression test suite
- UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)

Sprint
- Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
- User guides updated
- Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo)

User Story
- Manual test scripts developed and run
- Tested by BA
- Deployed in shared QA environment, using the automated deployment process
- Peer code review

Check-In
- Compiled (warning-free)
- Passes StyleCop
- Passes FxCop
- Create installer packages
- Run automated tests
- Run automated integration tests

PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn’t a DoD on a Sprint meaningless? It’s done on the end date regardless of whether those other activities are complete or not. My answer is that while that statement is true (the Sprint is done regardless when the end date rolls around), if the DoD activities haven’t been completed I would consider the Sprint a failure, similar to not completing what was committed/planned (failure may be too strong a word, but you get the idea). In the Retrospective, that becomes an agenda item to discuss so we understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.
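As a purely illustrative sketch (not part of the original article), the tiered DoDs and the “move an activity down” operation can be pictured as a simple data structure. All type and member names here are hypothetical:

using System;
using System.Collections.Generic;

// Hypothetical model of tiered Definitions of Done, ordered from the
// outermost cycle (Release) down to the innermost (Check-In).
enum Cycle { Release, Sprint, UserStory, CheckIn }

class DefinitionOfDone
{
    private readonly Dictionary<Cycle, List<string>> _activities =
        new Dictionary<Cycle, List<string>>
        {
            { Cycle.Release,   new List<string> { "UAT Testing", "User Manuals Updated" } },
            { Cycle.Sprint,    new List<string> { "Deploy to UAT Environment" } },
            { Cycle.UserStory, new List<string> { "Peer Code Review", "Tested by BA" } },
            { Cycle.CheckIn,   new List<string> { "Passes StyleCop", "Run Automated Tests" } }
        };

    // Maturing the team means moving an activity toward a smaller cycle,
    // e.g. MoveDown("UAT Testing", Cycle.Release, Cycle.Sprint).
    public void MoveDown(string activity, Cycle from, Cycle to)
    {
        if (to <= from)
            throw new ArgumentException("Activities only move down, toward smaller cycles.");
        if (!_activities[from].Remove(activity))
            throw new ArgumentException("Activity not found in the " + from + " DoD.");
        _activities[to].Add(activity);
    }
}

Calling MoveDown("UAT Testing", Cycle.Release, Cycle.Sprint) records exactly the kind of maturity step described above: UAT testing now happens every sprint instead of once per release.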

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger the creation of an on-premise Oracle E-Business Suite Service Request for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security (shielding enterprise resources) and scalability (minimizing firewall latency).

Let me use an analogy to help visualize the patterns: the on-premise system is your home, with your most valuable possessions, and the SaaS app is your favorite on-line store, which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

Pattern: Pull from Cloud
The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for integration approaches wherein messages trickle in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy: you avoid any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. (A minimal polling sketch follows below.)
Pros: On-premise assets are not exposed to the Internet; firewall issues are avoided by only initiating outbound connections.
Cons: Polling mechanisms may affect performance and may not satisfy near real-time requirements.
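To make the pull pattern concrete, here is a minimal C# polling sketch (illustrative only, not from the original article). The HTTPS query endpoint is a hypothetical placeholder; a real implementation would use the Oracle RightNow ROQL/SOAP APIs and add proper error handling:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CloudPoller
{
    // Hypothetical cloud query endpoint; a real poller would call the
    // SaaS provider's query API (e.g. RightNow ROQL over SOAP) instead.
    const string QueryUrl = "https://cloud.example.com/api/events?since=";

    static async Task Main()
    {
        var http = new HttpClient();
        DateTime lastPoll = DateTime.UtcNow.AddHours(-1); // watermark of the last poll

        while (true)
        {
            // Outbound-only call: the on-premise side initiates every request,
            // so no firewall port needs to be opened for inbound traffic.
            string url = QueryUrl + Uri.EscapeDataString(lastPoll.ToString("o"));
            string batch = await http.GetStringAsync(url);
            lastPoll = DateTime.UtcNow;

            Console.WriteLine("Fetched batch: " + batch);
            // Hand the batch to the on-premise integration layer here.

            await Task.Delay(TimeSpan.FromMinutes(60)); // hourly schedule, as in the post
        }
    }
}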
Pattern: Open Firewall Ports
The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy: you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door lets the postman drop small parcels, but there is still concern about cutting new holes for larger packages.
Pros: Optimal pattern for near real-time needs; simpler administration once the service is provisioned.
Cons: Needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.

Pattern: Virtual Private Networking
The on-premise network is "extended" to the cloud (or to an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN), so that messages are delivered to the on-premise system over a trusted channel. Using the home analogy: you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.
Pros: Individual firewall ports don't need to be opened; more suited for high-scalability needs; can support large-volume data integration; easier to manage one connection than a multitude of open ports.
Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

Pattern: Reverse Proxy / API Gateway
The on-premise system uses reverse proxy "API gateway" software in the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations in the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them.
Pros: Very secure; very flexible.
Cons: Introduces a new software component; needs DMZ deployment and management.

Pattern: On-Premise Agent (Tunneling)
A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection, with either pull- or push-based approaches, using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, and HTTP SSH tunneling are possible implementation options (a minimal sketch of this pattern appears at the end of this post). In the home analogy, a resident receives the parcel from the postal worker by opening the door, but still takes precautions with chain locks and package inspections.
Pros: Lightweight software; IT doesn't need to set up anything.
Cons: May bypass critical firewall checks (e.g. virus scans); separate software download; proliferation of non-IT-managed software.

Conclusion
The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs. asynchronous implementation, near real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
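As an illustration of the on-premise agent pattern, here is a minimal C# sketch (again, not from the original article) of an agent that opens an outbound WebSocket connection to a cloud endpoint and receives messages pushed over it. The endpoint URL is a hypothetical placeholder, and a real agent would add authentication, reconnection, and forwarding to the internal system:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class OnPremiseAgent
{
    static async Task Main()
    {
        using (var socket = new ClientWebSocket())
        {
            // The agent dials OUT through the firewall, so no inbound port is opened.
            // wss://cloud.example.com/agent is a placeholder endpoint.
            await socket.ConnectAsync(new Uri("wss://cloud.example.com/agent"),
                                      CancellationToken.None);

            var buffer = new byte[8192];
            while (socket.State == WebSocketState.Open)
            {
                // The cloud side pushes messages over the established connection.
                WebSocketReceiveResult result =
                    await socket.ReceiveAsync(new ArraySegment<byte>(buffer),
                                              CancellationToken.None);
                if (result.MessageType == WebSocketMessageType.Close) break;

                string message = Encoding.UTF8.GetString(buffer, 0, result.Count);
                Console.WriteLine("Received from cloud: " + message);
                // A real agent would forward the message to the internal service here.
            }
        }
    }
}

Because the connection is initiated from inside the network, this works through firewalls that allow outbound HTTPS, which is precisely both the appeal and the risk the post notes.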

    Read the article
