Search Results

Search found 14816 results on 593 pages for 'logical model'.

Page 430/593 | < Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >

  • Entity Framework, full-text search and temporary tables

    - by markus
    I have a LINQ-to-Entities query builder, nesting different kinds of Where clauses depending on a fairly complex search form. It works great so far. Now I need to use a SQL Server full-text search index in some of my queries. Is there any way to add the search term directly to the LINQ query, and have the score available as a selectable property? If not, I could write a stored procedure to load a list of all row IDs matching the full-text search criteria, and then use a LINQ-to-Entities query to load the detail data and evaluate other optional filter criteria in a loop per row. That would, of course, be a very bad idea performance-wise. Another option would be to use a stored procedure to insert all row IDs matching the full-text search into a temporary table, and then let the LINQ query join the temporary table. The question is: how do I join a temporary table in a LINQ query, when it cannot be part of the entity model?

    Read the article

  • django forms doubt

    - by webvulture
    I am a bit confused with forms in Django. I have information for the form (a poll, i.e. the poll question and options) coming from some db table - table1, or say class1 in models. Now the vote from this poll needs to be captured, which is another model, say class2. So I am just getting confused with the whole flow of forms here, I think. How will the data be captured into the class2 table? I was trying something like this:

        def blah1():
            get_data_from_db_table_1()
            x = blah2Form()
            return render_to_response('blah.html', {...})
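    A minimal sketch of the usual flow for a case like this, with hypothetical Poll and Vote models standing in for the question's class1/class2 (none of these names come from the question):

        # Sketch only: the app name, Poll/Vote models, VoteForm and field layout are
        # assumptions standing in for the question's class1 (poll) and class2 (vote).
        from django import forms
        from django.http import HttpResponseRedirect
        from django.shortcuts import get_object_or_404, render_to_response
        from myapp.models import Poll, Vote   # hypothetical app and models

        class VoteForm(forms.ModelForm):
            class Meta:
                model = Vote
                exclude = ('poll',)            # the poll is attached in the view, not by the user

        def cast_vote(request, poll_id):
            poll = get_object_or_404(Poll, id=poll_id)        # the "table1" data
            if request.method == 'POST':
                form = VoteForm(request.POST)
                if form.is_valid():
                    new_vote = form.save(commit=False)
                    new_vote.poll = poll                      # tie the vote to its poll
                    new_vote.save()                           # the row lands in the class2 table
                    return HttpResponseRedirect('/polls/%s/' % poll_id)
            else:
                form = VoteForm()
            return render_to_response('blah.html', {'poll': poll, 'form': form})

    The point is that the form is built from the vote model (class2), while the poll (class1) is only loaded for display and attached to the vote in the view.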

    Read the article

  • How to create migration in subdirectory with Rails?

    - by Adrian Serafin
    Hi! I'm writing a SaaS-model application. My application database consists of two logical parts: application tables, such as users, roles, and so on, and user-defined tables (the user can generate them from the UI level) that can be different for each application instance. All tables are created by the Rails migrations mechanism. I would like to put the user-defined tables in another directory: db/migrations for application tables and db/migrations/custom for tables generated by the user. That way I can do svn:ignore on db/migrations/custom, and when I update my app on clients' servers it will only update the application table migrations. Is there any way to achieve this in Rails?

    Read the article

  • [Django] Automatically Update Field when a Different Field is Changed

    - by Gordon
    I have a model with a bunch of different fields like first_name, last_name, etc. I also have fields first_name_ud, last_name_ud, etc. that correspond to the last-updated date for the related fields (i.e. when first_name is modified, first_name_ud is set to the current date). Is there a way to make this happen automatically, or do I need to check which fields have changed each time I save an object and then update the related "_ud" fields? Thanks a lot!
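    One common way to do this in Django is to override save(), compare against the stored copy, and stamp the *_ud fields; a sketch with a hypothetical Person model (a pre_save signal would work the same way):

        # Sketch only: the model name, field names and tracked-field list are assumptions.
        import datetime
        from django.db import models

        TRACKED_FIELDS = ('first_name', 'last_name')   # fields that have *_ud companions

        class Person(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            first_name_ud = models.DateField(null=True, blank=True)
            last_name_ud = models.DateField(null=True, blank=True)

            def save(self, *args, **kwargs):
                if self.pk:   # only an existing row has previous values to compare against
                    old = Person.objects.get(pk=self.pk)
                    for name in TRACKED_FIELDS:
                        if getattr(old, name) != getattr(self, name):
                            setattr(self, name + '_ud', datetime.date.today())
                super(Person, self).save(*args, **kwargs)

    The extra query per save is the price of knowing what changed; caching the original values when the instance is loaded avoids it.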

    Read the article

  • Why does a newly created EF-entity throw an ID is null exception when trying to save?

    - by Richard
    I'm trying out the Entity Framework included in VS2010, but I've hit a problem with my database/model generated from the graphical interface. When I do:

        user = dataset.UserSet.CreateObject();
        user.Id = Guid.NewGuid();
        dataset.UserSet.AddObject(user);
        dataset.SaveChanges();

    I get:

        {"Cannot insert the value NULL into column 'Id', table 'BarSoc2.dbo.UserSet'; column does not allow nulls. INSERT fails.\r\nThe statement has been terminated."}

    The table I'm inserting into looks like this:

        -- Creating table 'UserSet'
        CREATE TABLE [dbo].[UserSet] (
            [Id] uniqueidentifier NOT NULL,
            [Name] nvarchar(max) NOT NULL,
            [Username] nvarchar(max) NOT NULL,
            [Password] nvarchar(max) NOT NULL
        );
        GO

        -- Creating primary key on [Id] in table 'UserSet'
        ALTER TABLE [dbo].[UserSet]
        ADD CONSTRAINT [PK_UserSet] PRIMARY KEY CLUSTERED ([Id] ASC);
        GO

    Am I creating the object in the wrong way, or doing something else basic wrong?

    Read the article

  • Django: Prefill a ManyToManyField

    - by Emile Petrone
    I have a ManyToManyField on a settings page that isn't rendering. The data was filled when the user registered, and I am trying to prefill that data when the user tries to change it. Thanks in advance for the help!

    The HTML:

        {{ form.types.label }}
        {% if add %}
            {{ form.types }}
        {% else %}
            {% for type in form.types.all %}
                {{ type.description }}
            {% endfor %}
        {% endif %}

    The View:

        @csrf_protect
        @login_required
        def edit_host(request, host_id, template_name="host/newhost.html"):
            host = get_object_or_404(Host, id=host_id)
            if request.user != host.user:
                return HttpResponseForbidden()
            form = HostForm(request.POST)
            if form.is_valid():
                if request.method == 'POST':
                    if form.cleaned_data.get('about') is not None:
                        host.about = form.cleaned_data.get('about')
                    if form.cleaned_data.get('types') is not None:
                        host.types = form.cleaned_data.get('types')
                    host.save()
                    form.save_m2m()
                    return HttpResponseRedirect('/users/%d/' % host.user.id)
            else:
                form = HostForm(initial={
                    "about": host.about,
                    "types": host.types,
                })
            data = {"host": host, "form": form}
            return render_to_response(template_name, data, context_instance=RequestContext(request))

    Form:

        class HostForm(forms.ModelForm):
            class Meta:
                model = Host
                fields = ('types', 'about', )

            types = forms.ModelMultipleChoiceField(
                widget=forms.CheckboxSelectMultiple,
                queryset=Type.objects.all(),
                required=True)
            about = forms.CharField(
                widget=forms.Textarea(),
                required=True)

            def __init__(self, *args, **kwargs):
                super(HostForm, self).__init__(*args, **kwargs)
                self.fields['about'].widget.attrs = {'placeholder': 'Hello!'}
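    For comparison, a ModelForm usually prefills a ManyToManyField when it is constructed with instance=... rather than an initial dict; a rough sketch of the same view under that assumption (Host and HostForm as defined in the question):

        # Sketch: let the ModelForm prefill and save the types M2M itself via instance=host.
        from django.http import HttpResponseForbidden, HttpResponseRedirect
        from django.shortcuts import get_object_or_404, render_to_response
        from django.template import RequestContext

        def edit_host(request, host_id, template_name="host/newhost.html"):
            host = get_object_or_404(Host, id=host_id)
            if request.user != host.user:
                return HttpResponseForbidden()
            if request.method == 'POST':
                form = HostForm(request.POST, instance=host)   # bound AND tied to host
                if form.is_valid():
                    form.save()                                # saves about and types together
                    return HttpResponseRedirect('/users/%d/' % host.user.id)
            else:
                form = HostForm(instance=host)                 # widget pre-checks host.types
            return render_to_response(template_name, {"host": host, "form": form},
                                      context_instance=RequestContext(request))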

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is quite technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works.

    Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one: a root class with 512 kbit/s to distribute, split into classes A and B with 256 kbit/s each, each of which is in turn split into two leaf classes, A1/A2 and B1/B2. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
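    To make the "decoupling" idea concrete, here is a small illustration of my own (not from the question or the paper's notation): a two-piece service curve written as [m1 d m2] guarantees slope m1 for the first d of a backlog period and slope m2 afterwards, which is how a steep first segment buys low delay without raising the long-term rate.

        # Illustration only: a two-slope service curve of the [m1, d, m2] form used above.
        def service_curve(t, m1, d, m2):
            """Bits of service guaranteed by time t (seconds) since the class became backlogged."""
            if t <= d:
                return m1 * t
            return m1 * d + m2 * (t - d)

        # The [256kbit/s 20ms 128kbit/s] curve from the question versus a flat 128 kbit/s:
        print(service_curve(0.020, 256000, 0.020, 128000))   # 5120.0 bits by 20 ms
        print(service_curve(0.020, 128000, 0.020, 128000))   # 2560.0 bits by 20 ms
        # Same 128 kbit/s long-term slope, but the first curve delivers twice as much
        # service in the first 20 ms, i.e. lower delay for the same average bandwidth.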

    Read the article

  • Doctrine Problem: Couldn't get last insert identifier.

    - by cnkt
    When I try to save data to my model, Doctrine throws this exception:

        Message: Couldn't get last insert identifier.

    My table setup code is:

        $this->hasColumn('id', 'integer', 4, array(
            'type' => 'integer',
            'length' => 4,
            'fixed' => false,
            'unsigned' => false,
            'primary' => true,
            'autoincrement' => true,
        ));

    Please help. Thanks.

    Read the article

  • How to use acts-as-commentable-with-threading in Rails

    - by alex
    Hi all, I am developing my first Rails site (yup, I am a Rails idiot). I'm writing a blog, and I got to the comments part. I installed acts-as-commentable-with-threading (GitHub), and I made and ran the migration like the install instructions said. I have added acts_as_commentable to my Posts model and I have a Comments controller. When I add

        @comment = Comment.build_from(params[:id], 1, params[:body])

    I get the error:

        undefined method `build_from' for #

    Clearly I am doing something terribly wrong, and I don't really get the example. What should I be feeding to build_from? Can somebody explain this plugin step by step? :) Or is there an easier way to get simple threaded comments?

    Read the article

  • ASP.Net security using Operations Based Security

    - by Josh
    All the security stuff I have worked with in the past in ASP.NET has, for the most part, been role based. This is easy enough to implement, and ASP.NET is geared for this type of security model. However, I am looking for something a little more fine-grained than simple role-based security. Essentially I want to be able to write code like this:

        if (SecurityService.CanPerformOperation("SomeUpdateOperation")) {
            // perform some update logic here
        }

    I would also need row-level security access like this:

        if (SecurityService.CanPerformOperation("SomeViewOperation", SomeEntityIdentifier)) {
            // Allow user to see specific data
        }

    Again, fine-grained access control. Is there anything like this already built? Some framework that I can drop into ASP.NET and start using, or am I going to have to build this myself?

    Read the article

  • Only show content when certain criteria is met?

    - by Elliot
    I'm wondering if there's a best practice for what I'm trying to accomplish. First we have the Category model; a category has_many posts. Now let's say users add posts. I have a page on which I want to display only the current user's posts, grouped by category. Let's say we have the following categories: A, B, and C, and User 1 has posted in categories A and B. In my view I have something like:

        @categories.each do |category|
          category.name
          @posts.each do |post|
            if post.category_id == category.id
              # post content here
            end
          end
        end

    The problem with this is that I'm going to show the empty category as well as the categories that do have content. Is there a more efficient way of going about this? I don't want to show the empty categories. Best, Elliot

    Read the article

  • Using Sqlite3 with CakePHP

    - by Dan
    Hello, I'm trying to run SQLite3 with CakePHP. Yes, I know it's not officially supported, but this post here: http://stackoverflow.com/questions/1021980/cakephp-sqlite says it's possible. I've downloaded the new driver file "dbo_sqlite3.7.php" and put it in "cake/libs/model/datasources/dbo". Now I'm having trouble getting connected to the db. Things I'm confused about:

    - Where should my database.sqlite file be kept?
    - What should my config file look like? Should I be referencing the full filename of the driver, like 'driver' => 'dbo_sqlite3.7.php'? And can I use relative paths to the db file?
    - Is there a difference between sqlite3 files and sqlite2 files by themselves, or is it just the way that you handle the files that makes the difference?

    Thank you for your help. I'm new to Cake and I'm excited about learning more.

    Read the article

  • C# settings using ApplicationSettingsBase - roaming and common

    - by Mark Pim
    I'm using the Windows Forms application settings architecture (or however you're supposed to refer to it) and am successfully saving user settings to AppData. What I want to do is have some settings common to all users of a particular machine and some settings which roam with users across machines. For example, I have some settings relating to a peripheral attached to the computer (model, settings etc.) and some user preferences like user interface colours. The colour preferences should roam with the user, but the peripheral settings should stay on the local computer no matter who's logged on. How can I mark these types of settings so that some get stored under AppData\Roaming and some under AppData\Local? Note that I don't want application-scoped settings - each computer the app will be installed on will have different settings. I'm targeting .NET 3.0 if that makes a difference.

    Read the article

  • Entity Framework and Sql Server view question

    - by Sergio Romero
    Hi to all. For several reasons that I don't have the liberty to talk about, we are defining a view on our SQL Server 2005 database like so:

        CREATE VIEW [dbo].[MeterProvingStatisticsPoint]
        AS
        SELECT
            CAST(0 AS BIGINT) AS 'RowNumber',
            CAST(0 AS BIGINT) AS 'ProverTicketId',
            CAST(0 AS INT) AS 'ReportNumber',
            GETDATE() AS 'CompletedDateTime',
            CAST(1.1 AS float) AS 'MeterFactor',
            CAST(1.1 AS float) AS 'Density',
            CAST(1.1 AS float) AS 'FlowRate',
            CAST(1.1 AS float) AS 'Average',
            CAST(1.1 AS float) AS 'StandardDeviation',
            CAST(1.1 AS float) AS 'MeanPlus2XStandardDeviation',
            CAST(1.1 AS float) AS 'MeanMinus2XStandardDeviation'
        WHERE 0 = 1

    The idea is that the Entity Framework will create an entity based on this query, which it does, but it generates it with a warning that states the following:

        warning 6002: The table/view 'Keystone_Local.dbo.MeterProvingStatisticsPoint' does not have a primary key defined. The key has been inferred and the definition was created as a read-only table/view.

    And it decides that the CompletedDateTime field will be this entity's primary key. We are using EdmGen to generate the model. Is there a way to keep the Entity Framework from including any field of this view as a primary key? Thanks for the help.

    Read the article

  • Can't access stored procedure from entities

    - by molgan
    Hello, I'm using the Entity Framework that came with 3.5 SP1. In Visual Studio I have imported the function, so it shows under "Function Imports" in the "Model Browser". I have assigned all rights to the user that connects to the database. But it doesn't show in the IntelliSense when I type "_entities."; only my other entities show there. I opened the designer file and couldn't find it there either. The stored procedure should return a scalar datetime value, and not an "entity". What might be wrong here? /M

    Read the article

  • Routing Users to single Models with Rails

    - by Eric Koslow
    I'm creating a Rails app for students and high schools, and I'm having some trouble with my User.rb. I want to have a user model to be used for logging in, but have that user have many roles. The tricky part is that I want users that have a student role to have_one student page, and those that have a role of principal to have_one high_school page. The students are also nested in the high_school, so the entire thing becomes a big mess. So my question(s): How do I limit a user to only creating one student / high school to represent them? Also, how would I nest these student pages inside the high_school without screwing up the user system? My environment: Rails 3 and Ruby 1.9.2dev. Thank you!

    Read the article

  • Localize DisplayNameAttributes in ActionFilter?

    - by boris callens
    Is it possible to access the DisplayNameAttributes that are used on my ViewData.Model so I can localize them before sending them to the view? Something like this:

        public void OnActionExecuted(ActionExecutedContext filterContext)
        {
            foreach (DisplayNameAttribute attr in filterContext...)
            {
                attr.TheValue = AppMessages.GetLocalization(attr.TheValue);
            }
        }

    What I'm missing is how to access the attributes. Is this possible at all? P.S.: We're using VB.NET at my job and it's infiltrating my brain, so apologies if my C# is a tad off.

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
        10000+0 records in
        10000+0 records out
        41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    OF NOTE: My RAID 50 is 3 sets of 8 disks - this might not be the best config for SPEED.

        OS: Ubuntu 12.04.1 x64
        Hardware RAID: RocketRAID 2782 - 24 Port Controller
        Hard drive type: Seagate Barracuda ES.2 1TB
        Drivers: v1.1 Open Source Linux Drivers

    So 24 x 1TB drives, partitioned using parted. The filesystem is ext4. The I/O scheduler WAS noop, but I have changed it to deadline with no apparent performance benefit/cost.

        serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
        GPT fdisk (gdisk) version 0.8.1

        Partition table scan:
          MBR: protective
          BSD: not present
          APM: not present
          GPT: present

        Found valid GPT with protective MBR; using GPT.
        Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 41020686302
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 5062589 sectors (2.4 GiB)

        Number  Start (sector)  End (sector)  Size      Code  Name
           1              2048   41015625727  19.1 TiB  0700  primary

    To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors. I can't seem to get much, if any, higher than the 361 MB/s; is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? Or if you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I just expected more from RAID 50 and 24 drives together...

    EDIT (hdparm results):

        serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb

        /dev/sdb:
         Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
         Timing buffered disk reads: 884 MB in  3.00 seconds = 294.32 MB/sec

    EDIT 2 (config details): I am also using a RAID block size of 256K. I was told a larger block size is better for larger files (in my case, large video files).

    EDIT 3: Bonnie++ results. Would love some guidance with this!

    Read the article

  • rails - named scope help

    - by sameera207
    Hi all, I want to write a named scope to get a record by its id. For example, I have a model called Event, and it's the same as doing Event.find(id) (I don't want to use find inside my controller; I want my controller to use a named scope, for future flexibility). So I have written a named scope:

        named_scope :from_id, lambda { |id| { :conditions => ['id = ?', id] } }

    and I'm calling it from my controller like Event.from_id(id). But my problem is that it returns an array of Event objects, not just one object. For example, if I want to get the event name I have to write:

        event = Event.from_id(id)
        event[0].name

    instead I want to write:

        event = Event.from_id(id)
        event.name

    Am I doing something wrong here? Thanks in advance, cheers, sameera

    Read the article

  • Can you handle both json and html datatypes in the same ajax call?

    - by Prabhu
    Is there any way I can handle both JSON and HTML return types when posting with jQuery ajax? For example, this ajax call expects HTML back:

        $.ajax({
            type: "POST",
            url: url,
            data: data,
            dataType: "html",
            success: function (response) {
                var $html = "<li class='list-item'>" + response + "</li>";
                $('#a').prepend($html);
            },
            error: function (xhr, status, error) {
                alert(xhr.statusText);
            }
        });

    But I wanted to modify it so that I can return a JSON object if there is a model error, so I can do something like this:

        success: function (response) {
            if (response.Error) {
                alert(response.Message);
            } else {
                var $html = "<li class='list-item'>" + response + "</li>";
                $('#a').prepend($html);
            }
        }

    Is this possible?

    Read the article

  • Reverse Engineer a .pyo python file

    - by Brian
    I have 2 .pyo Python files that I can convert to .py source files, but they don't compile perfectly, as hinted by decompyle's verify. Looking at the source code, I can tell that config.pyo simply had variables in a list:

        ADMIN_USERIDS = [116901, 141, 349244, 39, 1159488]

    I would like to take the original .pyo and disassemble it, or do whatever I need to do in order to change one of these IDs. Alternatively, in model.pyo the source indicates a line like:

        if (productsDeveloperId != self.getUserId()):

    All I would want to do is hex edit the != to be a ==. This would be simple with a Windows exe, but I can't find a good Python disassembler anywhere. Any suggestions are welcome... I am new to reading bytecode and new to Python as well.
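    As a starting point for poking at the bytecode, here is a sketch only; it assumes a Python 2-era .pyo, whose 8-byte header (magic number plus timestamp) is followed by a marshalled code object, and it has to be run under the same major Python version that produced the file:

        # Sketch: load a CPython 2.x .pyo/.pyc and disassemble its top-level code object.
        import dis
        import marshal

        def dump_pyo(path):
            with open(path, 'rb') as f:
                f.read(8)               # skip the 4-byte magic number and 4-byte timestamp
                code = marshal.load(f)  # the module's top-level code object
            dis.dis(code)               # print the bytecode with names and constants
            return code

        code = dump_pyo('config.pyo')
        print(code.co_consts)           # individual constants, e.g. the ADMIN_USERIDS numbers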

    Read the article

  • What kind of knowledge do you need to invent a new programming language?

    - by systempuntoout
    I just finished reading "Coders at Work", a brilliant book by Peter Seibel with 15 interviews with some of the most interesting computer programmers alive today. Many of the interviewees have (co)invented or implemented a new programming language. Some examples:

        Joe Armstrong: inventor of Erlang
        L. Peter Deutsch: implementer of Smalltalk-80
        Brendan Eich: inventor of JavaScript
        Dan Ingalls: Smalltalk implementor and designer
        Simon Peyton Jones: co-inventor of Haskell
        Guy Steele: co-inventor of Scheme

    It is beyond any doubt that their minds have something special and unreachable, and I'm not crazy enough to think I will ever be able to create a new language; I'm just interested in this topic. So, imagine a funny, grotesque scenario where your crazy boss one day comes to your desk and says "I want a new programming language with my name on it.. take the time you need and do it". What is the right approach to studying this fascinating, intimidating, magic topic? What kind of knowledge do you need to model, design and implement a brand new programming language?
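    For a concrete taste of the "implement" part (my own minimal illustration, not anything from the book): even a toy language needs a tokenizer plus something that parses and evaluates it. Here is a sketch for arithmetic with + and * only.

        # Toy illustration: a tokenizer and a recursive-descent parser that evaluates as it parses.
        # Grammar: expr := term ('+' term)* ; term := NUMBER ('*' NUMBER)*
        import re

        def tokenize(src):
            return re.findall(r'\d+|[+*]', src)

        def parse_term(tokens):
            value = int(tokens.pop(0))
            while tokens and tokens[0] == '*':
                tokens.pop(0)
                value *= int(tokens.pop(0))
            return value

        def evaluate(src):
            tokens = tokenize(src)
            value = parse_term(tokens)
            while tokens and tokens[0] == '+':
                tokens.pop(0)
                value += parse_term(tokens)
            return value

        print(evaluate("2+3*4"))   # 14: '*' binds tighter than '+'

    A real language adds an abstract syntax tree, a type system or runtime model, and eventually a compiler or virtual machine on top of this same pipeline.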

    Read the article

  • Techniques for modeling a dynamic dataflow with Java concurrency API

    - by Maian
    Is there an elegant way to model a dynamic dataflow in Java? By dataflow, I mean there are various types of tasks, and these tasks can be "connected" arbitrarily, such that when a task finishes, successor tasks are executed in parallel using the finished task's output as input, or when multiple tasks finish, their output is aggregated in a successor task (see flow-based programming). By dynamic, I mean that the type and number of successor tasks when a task finishes depend on the output of that finished task, so for example task A may spawn task B if it has a certain output, but may spawn task C if it has a different output. Another way of putting it is that each task (or set of tasks) is responsible for determining what the next tasks are.

    Sample dataflow for rendering a webpage. I have as task types: file downloader, HTML/CSS renderer, HTML parser/DOM builder, image renderer, JavaScript parser, JavaScript interpreter.

        - File downloader task for the HTML file
          - HTML parser/DOM builder task
            - File downloader task for each embedded file/link
              - If image, image renderer
              - If external JavaScript, JavaScript parser
                - JavaScript interpreter
              - Otherwise, just store in some var/field in the HTML parser task
            - JavaScript parser for each embedded script
              - JavaScript interpreter
            - Wait for the above tasks to finish, then HTML/CSS renderer (obviously not optimal or perfectly correct, but this is simple)

    I'm not saying the solution needs to be some comprehensive framework (in fact, the closer to the JDK API, the better), and I absolutely don't want something as heavyweight as, say, Spring Web Flow or some declarative markup or other DSL. To be more specific, I'm trying to think of a good way to model this in Java with Callables, Executors, ExecutorCompletionServices, and perhaps various synchronizer classes (like Semaphore or CountDownLatch). There are a couple of use cases and requirements:

    - Don't make any assumptions about which executor(s) the tasks will run on. In fact, to simplify, just assume there's only one executor. It can be a fixed thread pool executor, so a naive implementation can result in deadlocks (e.g. imagine a task that submits another task and then blocks until that subtask is finished, and now imagine several of these tasks using up all the threads).
    - To simplify, assume that the data is not streamed between tasks (task output to succeeding task input) - the finishing task and succeeding task won't exist together, so the input data to the succeeding task will not be changed by the preceding task (since it's already done).
    - There are only a couple of operations that the dataflow "engine" should be able to handle:
      - A mechanism where a task can queue more tasks
      - A mechanism whereby a successor task is not queued until all the required input tasks are finished
      - A mechanism whereby the main thread (or other threads not managed by the executor) blocks until the flow is finished
      - A mechanism whereby the main thread (or other threads not managed by the executor) blocks until certain tasks have finished
    - Since the dataflow is dynamic (it depends on the input/state of the task), the activation of these mechanisms should occur within the task code, e.g. the code in a Callable is itself responsible for queueing more Callables.
    - The dataflow "internals" should not be exposed to the tasks (Callables) themselves - only the operations listed above should be available to the task.
    - Note that the type of the data is not necessarily the same for all tasks, e.g. a file download task may accept a File as input but will output a String.
    - If a task throws an uncaught exception (indicating some fatal error requiring all dataflow processing to stop), it must propagate up to the thread that initiated the dataflow as quickly as possible and cancel all tasks (or something fancier, like a fatal error handler).
    - Tasks should be launched as soon as possible. This, along with the previous requirement, should preclude simple Future polling + Thread.sleep().
    - As a bonus, I would like the dataflow engine itself to perform some action (like logging) every time a task is finished, or when none has finished in X time since the last task finished. Something like:

        ExecutorCompletionService<T> ecs;
        while (hasTasks()) {
            Future<T> future = ecs.poll(1, TimeUnit.MINUTES);
            some_action_like_logging();
            if (future != null) {
                future.get()
                ...
            }
            ...
        }

    Are there straightforward ways to do all this with the Java concurrency API? Or if it's going to be complex no matter what with what's available in the JDK, is there a lightweight library that satisfies the requirements? I already have a partial solution that fits my particular use case (it cheats in a way, since I'm using two executors, and just so you know, it's not related at all to the web browser example I gave above), but I'd like to see a more general-purpose and elegant solution.

    Read the article

  • Can I pass data via jQuery to an ASP.NET MVC controller action and have a view rendered in a new browser tab?

    - by Simon Lomax
    Hi, can anyone advise if it's possible to pass data via jQuery to an ASP.NET MVC controller action and have a view rendered in a new browser tab based on the model data passed to the action method? My scenario is that I have a jqGrid populated with product info on a page. The user would tick the items in the grid that they would like a label produced for. After they've made their selection they would click a button, and I would like (if possible) to render a view that contains a label for each selected item and have the view render in a new browser tab. All the code to allow the selections and post the relevant data back to the action method is working fine, and I know it's easy to use the jQuery $(selector).load() command to populate an element on the current page with the HTML returned from the action. But is it possible to populate an element on a page in a new browser tab? If it is, how would I go about it? Hope this makes sense.

    Read the article

  • Pass a variable to a Symfony Form

    - by Juan Besa
    Hi, I am building a web application for a school using Symfony 1.4 and Doctrine, and I want to make a very simple form to add a course to a student. The main problem I have is that in the drop-down list I only want to show the courses in which the student is currently not enrolled. I already have a function in the model (in Student.class.php) which returns all the courses in which the student is not enrolled, but the problem is I don't know how to pass the student to the form's configure(). I have tried several options, like passing it with the constructor of the form to a global variable or using a special set method, but none of them have worked. Is there any way to pass the student to the configure() method? Thanks!

    Read the article
