Search Results

Search found 15903 results on 637 pages for 'mapping model'.


  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?


  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was: proc genmod data=data0 namelen=30; model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ/dist=normal; FREQ REPLICATE_VAR; run; My R code is: parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ , data=data0, family=gaussian, weights = REPLICATE_VAR) When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is: Name df Estimate StdErr LowerWaldCL UpperWaldCL ChiSq ProbChiSq Intercept 1 6.5007436 .00078884 6.4991975 6.5022897 67911982 0 agegrp4 1 .64607262 .00105425 .64400633 .64813891 375556.79 0 agegrp5 1 .4191395 .00089722 .41738099 .42089802 218233.76 0 agegrp6 1 -.22518765 .00083118 -.22681672 -.22355857 73401.113 0 agegrp7 1 -1.7445189 .00087569 -1.7462352 -1.7428026 3968762.2 0 agegrp8 1 -2.2908855 .00109766 -2.2930369 -2.2887342 4355849.4 0 race1 1 -.13454883 .00080672 -.13612997 -.13296769 27817.29 0 race3 1 -.20607036 .00070966 -.20746127 -.20467944 84319.131 0 weekend 1 .0327884 .00044731 .0319117 .03366511 5373.1931 0 seq2 1 -.47509583 .00047337 -.47602363 -.47416804 1007291.3 0 Scale 1 2.9328613 .00015586 2.9325559 2.9331668 -127 The summary output from R is: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 6.50074 0.10354 62.785 < 2e-16 AGEGRP4 0.64607 0.13838 4.669 3.07e-06 AGEGRP5 0.41914 0.11776 3.559 0.000374 AGEGRP6 -0.22519 0.10910 -2.064 0.039031 AGEGRP7 -1.74452 0.11494 -15.178 < 2e-16 AGEGRP8 -2.29089 0.14407 -15.901 < 2e-16 RACE1 -0.13455 0.10589 -1.271 0.203865 RACE3 -0.20607 0.09315 -2.212 0.026967 WEEKEND 0.03279 0.05871 0.558 0.576535 SEQ -0.47510 0.06213 -7.646 2.25e-14 The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficents and standard error estimates that were produced in SAS? To update, thanks to Spacedman's answer, I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight, that is an integer (and quite large, in the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.


  • No exception, no error, but still I don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code:

        final Thread t = new Thread() {
            public void run() {
                Looper.prepare();
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpResponse response;
                JSONObject obj = new JSONObject();
                try {
                    HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
                    obj.put("Model", ReadIn1);
                    obj.put("Product", ReadIn2);
                    obj.put("Manufacturer", ReadIn3);
                    obj.put("RELEASE", ReadIn4);
                    obj.put("SERIAL", ReadIn5);
                    obj.put("ID", ReadIn6);
                    obj.put("ANDROID_ID", ReadIn7);
                    obj.put("Language", ReadIn8);
                    obj.put("BOARD", ReadIn9);
                    obj.put("BOOTLOADER", ReadIn10);
                    obj.put("BRAND", ReadIn11);
                    obj.put("CPU_API", ReadIn12);
                    obj.put("DISPLAY", ReadIn13);
                    obj.put("FINGERPRINT", ReadIn14);
                    obj.put("HARDWARE", ReadIn15);
                    obj.put("UUID", ReadIn16);
                    StringEntity se = new StringEntity(obj.toString());
                    se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                    post.setEntity(se);
                    post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp");
                    response = client.execute(post);
                    if (response != null) {
                        InputStream in = response.getEntity().getContent();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Looper.loop();
            }
        };
        t.start();

    I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception; can someone help me? (I'm using Android Studio.)

    Edit: I don't get any exceptions anymore, but I still do not receive the JSON packet. When I call the website manually, I get a log file entry. Does anyone know what's wrong?

    Edit 2: When I debug, I get the response "HTTP/1.1 400 bad request". I'm sure it's not a permission problem. Any ideas?
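
    One detail that stands out is the manually set "host" header: the HTTP Host header should carry only the host name, and sending a full URL there is a plausible cause of the "HTTP/1.1 400 bad request". Below is a minimal sketch of the same POST without that header (HttpClient derives Host from the URL itself) and with the server's reply read back so the response body can be inspected; the endpoint is the one from the question, while the payload fields and log tag are placeholders.

        // Sketch (untested): POST a JSON body with Apache HttpClient and log the reply.
        // Uses org.apache.http.*, org.json.JSONObject, android.os.Build and android.util.Log.
        void postDeviceInfo() {
            try {
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");

                JSONObject obj = new JSONObject();
                obj.put("Model", Build.MODEL);      // illustrative payload fields
                obj.put("Product", Build.PRODUCT);

                StringEntity se = new StringEntity(obj.toString(), "UTF-8");
                se.setContentType("application/json");
                post.setEntity(se);
                // No manual "host" header: "http://pc.dyndns-office.com/mobile.asp" is not a valid Host value.

                HttpResponse response = client.execute(post);
                String body = EntityUtils.toString(response.getEntity());
                Log.d("JsonUpload", response.getStatusLine() + " " + body);
            } catch (Exception e) {
                Log.e("JsonUpload", "POST failed", e);
            }
        }

    If the 400 persists, comparing the logged response body with what the ASP page expects is usually the quickest way to narrow it down.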


  • Initialization of ComboBox in datagrid, Silverlight 4.0

    - by Budda
    I have datagrid with list of MyPlayer objects linked to ItemsSource, there are ComboBoxes inside of grid that are linked to a list of inner object, and binding works correctly: when I select one of the item then its value is pushed to data model and appropriately updated in other places, where it is used. The only problem: initial selections are not displayed in my ComboBoxes. I don't know why..? Instance of the ViewModel is assigned to view DataContext. Here is grid with ComboBoxes (grid is binded to the SquadPlayers property of ViewModel): <data:DataGrid ="True" AutoGenerateColumns="False" ItemsSource="{Binding SquadPlayers}"> <data:DataGrid.Columns> <data:DataGridTemplateColumn Header="Rig." Width="50"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <ComboBox SelectedItem="{Binding Rigid, Mode=TwoWay}" ItemsSource="{Binding IntLevels, Mode=TwoWay}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid> Here is ViewModel class ('_model_DataReceivedEvent' method is called asynchronously, when data are received from server): public class SquadViewModel : ViewModelBase<SquadModel> { public SquadViewModel() { SquadPlayers = new ObservableCollection<SquadPlayer>(); } private void _model_DataReceivedEvent(List<SostavPlayerData> allReadyPlayers) { TeamTask task = new TeamTask { Rigid = 1 }; foreach (SostavPlayerData spd in allReadyPlayers) { SquadPlayer sp = new SquadPlayer(spd, task); SquadPlayers.Add(sp); } RaisePropertyChanged("SquadPlayers"); } And here is SquadPlayer class (it's objects are binded to the grid rows): public class SquadPlayer : INotifyPropertyChanged { public SquadPlayer(SostavPlayerData spd) { _spd = spd; Rigid = 2; } public event PropertyChangedEventHandler PropertyChanged; private int _rigid; public int Rigid { get { return _rigid; } set { _rigid = value; if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs("Rigid")); } } } private readonly ObservableCollection<int> _statIntLevels = new ObservableCollection<int> { 1, 2, 3, 4, 5 }; public ObservableCollection<int> IntLevels { get { return _statIntLevels; } } It is expected to have all "Rigid" comboboxes set to "2" value, but they are not selected (items are in the drop-down list, and if any value is selected it is going to ViewModel). What is wrong with this example? Any help will be welcome. Thanks.


  • How do I add the j2ee.jar to a Java2WSDL ant script programmatically?

    - by Marcus
    I am using IBM's Rational Application Developer. I have an ant script that contains the Java2WSDL task. When I run it via IBM, it gives compiler errors unless I include the j2ee.jar file in the classpath via the run tool (it does not pick up the jar files in the classpath in the script). However, I need to be able to call this script programmatically, and it is giving me this error: "java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException" I'm not sure which jars need to be added or where? Since a simple echo script runs, I assume that it is the j2ee.jar or another ant jar that needs to be added. I've added it to the project's buildpath, but that doesn't help. (I also have ant.jar, wsanttasks.jar, all the ant jars from the plugin, tools.jar, remoteAnt.jar, and the swt - all which are included in the buildpath when you run the script by itself.) Script: <?xml version="1.0" encoding="UTF-8"?> <project default="build" basedir="."> <path id="lib.path"> <fileset dir="C:\Program Files\IBM\WebSphere\AppServer\lib" includes="*.jar"/> <!-- Adding these does not help. <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins\org.apache.ant_1.6.5\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70\jdk\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70\configuration\org.eclipse.osgi\bundles\1139\1\.cp\lib" includes="*.jar"/> <fileset dir="C:\Program Files\IBM\SDP70Shared\plugins" includes="*.jar"/> --> </path> <taskdef name="java2wsdl" classname="com.ibm.websphere.ant.tasks.Java2WSDL"> <classpath refid="lib.path"/> </taskdef> <target name="build"> <echo message="Beginning build"/> <javac srcdir="C:\J2W_Test\Java2Wsdl_Example" destdir="C:\J2W_Test\Java2Wsdl_Example"> <classpath refid="lib.path"/> <include name="WSExample.java"/> </javac> <echo message="Set up javac"/> <echo message="Running java2wsdl"/> <java2wsdl output="C:\J2W_Test\Java2Wsdl_Example\example\META-INF\wsdl\WSExample.wsdl" classpath="C:\J2W_Test\Java2Wsdl_Example" className= "example.WSExample" namespace="http://example" namespaceImpl="http://example" location="http://localhost:9080/example/services/WSExample" style="document" use="literal"> <mapping namespace="http://example" package="example"/> </java2wsdl> <echo message="Complete"/> </target> </project> Code: File buildFile = new File("build.xml"); Project p = new Project(); p.setUserProperty("ant.file", buildFile.getAbsolutePath()); DefaultLogger consoleLogger = new DefaultLogger(); consoleLogger.setErrorPrintStream(System.err); consoleLogger.setOutputPrintStream(System.out); consoleLogger.setMessageOutputLevel(Project.MSG_INFO); p.addBuildListener(consoleLogger); try { p.fireBuildStarted(); p.init(); ProjectHelper helper = ProjectHelper.getProjectHelper(); p.addReference("ant.projectHelper", helper); helper.parse(p, buildFile); p.executeTarget(p.getDefaultTarget()); p.fireBuildFinished(null); } catch (BuildException e) { p.fireBuildFinished(e); } Error: [java2wsdl] java.lang.NoClassDefFoundError: org.eclipse.core.runtime.CoreException [java2wsdl] at java.lang.J9VMInternals.verifyImpl(Native Method) [java2wsdl] at java.lang.J9VMInternals.verify(J9VMInternals.java:68) [java2wsdl] at java.lang.J9VMInternals.initialize(J9VMInternals.java:129) [java2wsdl] at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getDiscoveredServiceProviders(ServiceProviderManager.java:378) [java2wsdl] at com.ibm.ws.webservices.multiprotocol.discovery.ServiceProviderManager.getAllServiceProviders(ServiceProviderManager.java:214) [java2wsdl] at 
com.ibm.ws.webservices.wsdl.fromJava.Emitter.initPluggableBindings(Emitter.java:2704) [java2wsdl] at com.ibm.ws.webservices.wsdl.fromJava.Emitter.<init>(Emitter.java:389) [java2wsdl] at com.ibm.ws.webservices.tools.ant.Java2WSDL.execute(Java2WSDL.java:122) [java2wsdl] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275) [java2wsdl] at org.apache.tools.ant.Task.perform(Task.java:364) [java2wsdl] at org.apache.tools.ant.Target.execute(Target.java:341) [java2wsdl] at org.apache.tools.ant.Target.performTasks(Target.java:369) [java2wsdl] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216) [java2wsdl] at org.apache.tools.ant.Project.executeTarget(Project.java:1185) [java2wsdl] at att.ant.RunAnt.main(RunAnt.java:32)
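
    The NoClassDefFoundError for org.eclipse.core.runtime.CoreException suggests that the classloader running the embedded Ant build cannot see the Eclipse runtime jar (or j2ee.jar), even though lib.path is declared in the build file for the taskdef. One way to experiment with this — a sketch under the assumption that the jar locations below are adjusted to whatever RAD/WebSphere actually ships, not a verified fix — is to put the missing jars on a URLClassLoader and make that loader visible to the embedded Project before the build runs:

        // Sketch (untested): run the Ant build with extra jars visible to the embedded project.
        import java.io.File;
        import java.net.URL;
        import java.net.URLClassLoader;
        import org.apache.tools.ant.Project;
        import org.apache.tools.ant.ProjectHelper;

        public class RunAntWithExtraJars {
            public static void main(String[] args) throws Exception {
                // Hypothetical locations -- point these at the real j2ee.jar and
                // org.eclipse.core.runtime bundle on the machine in question.
                URL[] extraJars = new URL[] {
                    new File("C:/Program Files/IBM/WebSphere/AppServer/lib/j2ee.jar").toURI().toURL(),
                    new File("C:/Program Files/IBM/SDP70/plugins/org.eclipse.core.runtime.jar").toURI().toURL()
                };
                ClassLoader loader = new URLClassLoader(extraJars, RunAntWithExtraJars.class.getClassLoader());
                Thread.currentThread().setContextClassLoader(loader);

                File buildFile = new File("build.xml");
                Project p = new Project();
                p.setCoreLoader(loader); // let Ant resolve task/support classes through the same loader
                p.setUserProperty("ant.file", buildFile.getAbsolutePath());
                p.init();
                ProjectHelper helper = ProjectHelper.getProjectHelper();
                p.addReference("ant.projectHelper", helper);
                helper.parse(p, buildFile);
                p.executeTarget(p.getDefaultTarget());
            }
        }

    An alternative with the same effect is to fork the build in a separate JVM and pass those jars on the -cp of the java/ant command line, which mirrors what the IDE's run tool does when the jar is added there.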


  • java.util.Map with HtmlDataTable

    - by gerry
    Hi, I'm developing an application on GlassFish v3, which uses Sun's RI of Java EE 6 and JSF 2.0, and no switch away from Sun's RI (to MyFaces or the like) can be made. The problem is that I want to build an HtmlDataTable by hand (in Java code). The datatable should represent a java.util.Map, where the first column displays the keys and the second the values of the map. I have successfully built a PanelGrid from a java.util.List, using the "setValueExpression" method of UIComponent each time to bind the UI to the underlying list, but this doesn't work with the Map. Here is a snippet of my code:

        public HtmlDataTable getEntityDetailsDataTable() {
            ...
            Application app = FacesContext.getCurrentInstance().getApplication();
            HtmlDataTable component = (HtmlDataTable) app.createComponent(HtmlDataTable.COMPONENT_TYPE);
            component.setValueExpression("value", ExpressionUtil.createValueExpression(
                    "#{entityTree.entity." + fieldName + ".entrySet()}", Map.class));
            component.setVar("param");
            UIColumn column = new UIColumn();
            UIOutput label1 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[key]}", String.class);
            column.getChildren().add(label1);
            UIOutput label2 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[value]}", String.class);
            column.getChildren().add(label2);
            component.getChildren().add(column);
            ...
            return component;
        }

    Further, this code only prints out the content of the Map; on another page I need the values displayed in HtmlInputText elements and the whole map updated when the user clicks, e.g., a "Save" button. If there is a workaround to represent the Map as two Lists, please help me, because with that approach (map as two lists) I have no idea how the underlying map/database model could be updated again. Hopefully someone can help me.
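
    A minimal sketch of one way to approach this, assuming the component tree can stay as it is and only the backing data changes: HtmlDataTable iterates over an indexed collection (a List or DataModel), not over a Map, so the backing bean can expose the map's entries as a List and the table's value expression can point at that property. The bean and property names below (EntityTreeBean, getEntityDetailEntries) are illustrative, not taken from the question.

        // Sketch: expose a java.util.Map as a List the HtmlDataTable can iterate over.
        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class EntityTreeBean {

            private final Map<String, String> entityDetails = new LinkedHashMap<String, String>();

            // Bind the data table's "value" to this property, e.g.
            // #{entityTreeBean.entityDetailEntries}, and keep var="param".
            public List<Map.Entry<String, String>> getEntityDetailEntries() {
                return new ArrayList<Map.Entry<String, String>>(entityDetails.entrySet());
            }
        }

    With var="param", the two columns would then bind to #{param.key} and #{param.value} (property access on Map.Entry) rather than #{param[key]}. For the editable page, an HtmlInputText bound to #{param.value} should write back through Map.Entry.setValue(), since the entries returned by entrySet() are live views of the underlying map.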


  • Rails Google Maps integration JavaScript problem

    - by JZ
    I'm working on Rails 3.0.0.beta2, following Advanced Rails Recipes "Recipe #32, Mark locations on a Google Map", and I've hit a roadblock: I do not see a Google map. My @adds view uses @adds.to_json to connect the Google Maps API with my model. My database contains "latitude" and "longitude" as floating-point columns, and the entire project can be accessed on GitHub. Can you see where I'm not connecting the to_json output with the JavaScript correctly? Can you see any other glaring errors in my JavaScript? Thanks in advance! My application.js file:

        function initialize() {
          if (GBrowserIsCompatible() && typeof adds != 'undefined') {
            var map = new GMap2(document.getElementById("map"));
            map.setCenter(new GLatLng(37.4419, -122.1419), 13);
            map.addControl(new GLargeMapControl());
            function createMarker(latlng, add) {
              var marker = new GMarker(latlng);
              var html="<strong>"+add.first_name+"</strong><br />"+add.address;
              GEvent.addListener(marker,"click", function() {
                map.openInfoWindowHtml(latlng, html);
              });
              return marker;
            }
            var bounds = new GLatLngBounds;
            for (var i = 0; i < adds.length; i++) {
              var latlng=new GLatLng(adds[i].latitude,adds[i].longitude)
              bounds.extend(latlng);
              map.addOverlay(createMarker(latlng, adds[i]));
            }
            map.setCenter(bounds.getCenter(),map.getBoundsZoomLevel(bounds));
          }
        }
        window.onload=initialize;
        window.onunload=GUnload;

    Layouts/adds.html.erb:

        <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=true_or_false&amp;key=ABQIAAAAeH4ThRuftWNHlwYdvcK1QBTJQa0g3IQ9GZqIMmInSLzwtGDKaBQvZChl_y5OHf0juslJRNx7TbxK3Q" type="text/javascript"></script>
        <% if @adds -%>
        <script type="text/javascript">
          var maps = <%= @adds.to_json %>;
        </script>
        <% end -%>


  • Make errors when compiling HPL-2.1 on MOSIX-clustered Debian server

    - by tlake
    I'm trying to compile HPL 2.1 on a MOSIX-clustered Debian server, but the make process terminates with errors as seen below. Included are my makefile and two versions of output: one from a standard execution, and one from an execution run with the debug flag. Any help and guidance would be very much appreciated! The makefile: # ---------------------------------------------------------------------- # - shell -------------------------------------------------------------- # ---------------------------------------------------------------------- # SHELL = /bin/bash # CD = cd CP = cp LN_S = ln -s MKDIR = mkdir RM = /bin/rm -f TOUCH = touch # # ---------------------------------------------------------------------- # - Platform identifier ------------------------------------------------ # ---------------------------------------------------------------------- # ARCH = Linux_PII_CBLAS # # ---------------------------------------------------------------------- # - HPL Directory Structure / HPL library ------------------------------ # ---------------------------------------------------------------------- # TOPdir = $(HOME)/hpl-2.1 INCdir = $(TOPdir)/include BINdir = $(TOPdir)/bin/$(ARCH) LIBdir = $(TOPdir)/lib/$(ARCH) # HPLlib = $(LIBdir)/libhpl.a # # ---------------------------------------------------------------------- # - Message Passing library (MPI) -------------------------------------- # ---------------------------------------------------------------------- # MPinc tells the C compiler where to find the Message Passing library # header files, MPlib is defined to be the name of the library to be # used. The variable MPdir is only used for defining MPinc and MPlib. # MPdir = /usr/local MPinc = -I$(MPdir)/include MPlib = $(MPdir)/lib/libmpi.so # # ---------------------------------------------------------------------- # - Linear Algebra library (BLAS or VSIPL) ----------------------------- # ---------------------------------------------------------------------- # LAinc tells the C compiler where to find the Linear Algebra library # header files, LAlib is defined to be the name of the library to be # used. The variable LAdir is only used for defining LAinc and LAlib. # LAdir = $(HOME)/CBLAS/lib LAinc = LAlib = $(LAdir)/cblas_LINUX.a # # ---------------------------------------------------------------------- # - F77 / C interface -------------------------------------------------- # ---------------------------------------------------------------------- # You can skip this section if and only if you are not planning to use # a BLAS library featuring a Fortran 77 interface. Otherwise, it is # necessary to fill out the F2CDEFS variable with the appropriate # options. **One and only one** option should be chosen in **each** of # the 3 following categories: # # 1) name space (How C calls a Fortran 77 routine) # # -DAdd_ : all lower case and a suffixed underscore (Suns, # Intel, ...), [default] # -DNoChange : all lower case (IBM RS6000), # -DUpCase : all upper case (Cray), # -DAdd__ : the FORTRAN compiler in use is f2c. # # 2) C and Fortran 77 integer mapping # # -DF77_INTEGER=int : Fortran 77 INTEGER is a C int, [default] # -DF77_INTEGER=long : Fortran 77 INTEGER is a C long, # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short. 
# # 3) Fortran 77 string handling # # -DStringSunStyle : The string address is passed at the string loca- # tion on the stack, and the string length is then # passed as an F77_INTEGER after all explicit # stack arguments, [default] # -DStringStructPtr : The address of a structure is passed by a # Fortran 77 string, and the structure is of the # form: struct {char *cp; F77_INTEGER len;}, # -DStringStructVal : A structure is passed by value for each Fortran # 77 string, and the structure is of the form: # struct {char *cp; F77_INTEGER len;}, # -DStringCrayStyle : Special option for Cray machines, which uses # Cray fcd (fortran character descriptor) for # interoperation. # F2CDEFS = # # ---------------------------------------------------------------------- # - HPL includes / libraries / specifics ------------------------------- # ---------------------------------------------------------------------- # HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib) # # - Compile time options ----------------------------------------------- # # -DHPL_COPY_L force the copy of the panel L before bcast; # -DHPL_CALL_CBLAS call the cblas interface; # -DHPL_CALL_VSIPL call the vsip library; # -DHPL_DETAILED_TIMING enable detailed timers; # # By default HPL will: # *) not copy L before broadcast, # *) call the BLAS Fortran 77 interface, # *) not display detailed timing information. # HPL_OPTS = -DHPL_CALL_CBLAS # # ---------------------------------------------------------------------- # HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES) # # ---------------------------------------------------------------------- # - Compilers / linkers - Optimization flags --------------------------- # ---------------------------------------------------------------------- # CC = /usr/bin/gcc CCNOOPT = $(HPL_DEFS) CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops # # On some platforms, it is necessary to use the Fortran linker to find # the Fortran internals used in the BLAS library. # LINKER = ~/BLAS LINKFLAGS = $(CCFLAGS) # ARCHIVER = ar ARFLAGS = r RANLIB = echo # # ---------------------------------------------------------------------- Make output: ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so /bin/bash: /homes/laket/BLAS: Is a directory make[2]: *** [dexe.grd] Error 126 make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' make[1]: *** [build_tst] Error 2 make[1]: Leaving directory `/homes/laket/hpl-2.1' make: *** [build] Error 2 make: Target `all' not remade because of errors. Make -d output: Considering target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Looking for an implicit rule for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. 
Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/s.libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/SCCS/s.libhpl.a'. No implicit rule found for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. No need to remake target `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `dexe.grd'. Must remake target `dexe.grd'. ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so Putting child 0x0129a2c0 (dexe.grd) PID 24853 on the chain. Live child 0x0129a2c0 (dexe.grd) PID 24853 /bin/bash: /homes/laket/BLAS: Is a directory make[2]: Reaping losing child 0x0129a2c0 PID 24853 *** [dexe.grd] Error 126 Removing child 0x0129a2c0 PID 24853 from chain. Failed to remake target file `dexe.grd'. Finished prerequisites of target file `dexe'. Giving up on target file `dexe'. Finished prerequisites of target file `all'. Giving up on target file `all'. make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' Reaping losing child 0x010ce900 PID 24841 make[1]: *** [build_tst] Error 2 Removing child 0x010ce900 PID 24841 from chain. Failed to remake target file `build_tst'. make[1]: Leaving directory `/homes/laket/hpl-2.1' Reaping losing child 0x00d91ae0 PID 24774 make: *** [build] Error 2 Removing child 0x00d91ae0 PID 24774 from chain. Failed to remake target file `build'. Finished prerequisites of target file `install'. make: Target `all' not remade because of errors. Giving up on target file `install'. Finished prerequisites of target file `all'. Giving up on target file `all'. Thanks!


  • Setting default radio button on edit

    - by DTown
    So I'm trying to setup scaffolding to use radio buttons for the format button. It definitely works to add a new and edit. The problem is when I go to edit an entry the correct radio button isn't selected by default. <% form_for(@cinema) do |f| %> <%= f.error_messages %> <p> <%= f.label :title %><br /> <%= f.text_field :title %> </p> <p> <%= f.label :director %><br /> <%= f.text_field :director %> </p> <p> <%= f.label :release_date %><br /> <%= f.date_select :release_date, :start_year => 1900, :end_year => 2010 %> </p> <p> <%= f.label :running_time %><br /> <%= f.text_field :running_time %> </p> <p>Blockquote <%= f.label :format %><br /> <%= f.radio_button :format, "black & white" %> <%= label :format_bw, "Black & White" %> <%= f.radio_button :format, "color" %> <%= label :format_color, "Color" %> </p> <p> <%= f.submit 'Create' %> </p> <% end % Controller def edit @cinema = Cinema.find(params[:id]) end Model class Cinema < ActiveRecord::Base validates_presence_of :title, :on => :create validates_presence_of :title, :on => :update # validates_presence_of :director, :on => :create validates_presence_of :director, :on => :update # validates_presence_of :release_date, :on => :create validates_presence_of :release_date, :on => :update # validates_presence_of :format, :on => :create validates_presence_of :format, :on => :update # validates_presence_of :running_time, :on => :create validates_presence_of :running_time, :on => :update validates_numericality_of :running_time, :on => :create, :on => :update, :less_than_or_equal_to => 300, :greater_than => 0 end


  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I see fit to stick to best practices from here and there to build somehow my own working practices while observing the conventions, etc. I'm currently working on a project which my goals is to graduate the security access model from an environment's Active Directory to another environment's automatically. I don't know for any of you, but as far as I'm concerned, I meet some real difficulties sticking to only one way, then develop. I mean, I learn something new everyday while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I better know the Façade pattern which proved to be very practical in transactional programming in process systems. This seems to be less practical for desktop application as there are plenty of variables to consider in a desktop application that you don't have to care in transactional programming, as you're playing only with information data. As for my current project, I have: Groups; Organizational Units; Users. Which are all considered an entry in the Active Directory. This points out to be a good candidate for generics, as also approached this way by Bart de Smett's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage let's say groups, then he instantiate a source with the proper type: var groups = new DirectorySource<Group>(); This seems to be very a good way of doing. Despite, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing, since each pattern statisfies certain advantages, while also illustrating disadvantages under some usage conditions, I seem to want to develop with both patterns having a singleton Façade class with the underlying factories which represent the sub systems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for their respective entity (group, user, OU). To make a very long story short, I often have plenty of ideas while developping and this causes me some trouble, as I go from an idea to another feeling completely lost after a while. Yet I understand the advantages and disavantages, I have no trouble choosing from one pattern to another depending on the situation. Nevertheless, when it comes to programming itself, if I'm not part of a team, I feel sometimes like I can't do anything good. That is, because I can't stand not doing something "perfect" the first time. The role I play within the project is both: the project manager and the programmer. I am more comfortable in the project manager role, architectural role, analytical role than the developer's. Has any of you some good habbits to observe in project development? Thanks to you all! =)


  • When splitting MP4s with ffmpeg, how do I include metadata?

    - by Josh
    I have a few MP4s that i want to upload to my flickr account but they have a maximum size of 500mb as mine is only about 550 i was planing to simply split them in half then upload them, but i want to make sure all the meta data is included but it does not seem to be. I have tried each of the following with no luck, (at the end of this post i have the original and the new ffprobe outputs): ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4 with the this one I manually produced the individual meta tags that i took from this command ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -metadata major_brand="mp42" -metadata minor_version="1" -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" SANY0069A.MP4 using the output of the former command i also tried this: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4 Output: sample output from my first command: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 File 'SANY0069A.MP4' already exists. Overwrite ? 
[y/N] y Output #0, mp4, to 'SANY0069A.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 encoder : Lavf53.5.0 Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] for help frame= 7773 fps=4644 q=-1.0 Lsize= 289607kB time=00:04:19.35 bitrate=9147.4kbits/s video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571% and finaly, when i compare the ffprobe of the original and the first split part i get the 2 following outputs: original ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 
0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Split ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf53.5.0 comment : SANYO DIGITAL CAMERA CA9 Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 1970-01-01 00:00:00 I know this is incredibly long but its actually a quite simple question. I thought it would be best to provide as much detail as possible. any advice here would be great, Thanks


  • Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?

    - by BigJoe714
    I know this is probably possible using Streams, but I wasn't sure the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with InputStream input as an option. import java.io.*; import java.util.zip.GZIPOutputStream; import com.amazonaws.auth.PropertiesCredentials; import com.amazonaws.services.s3.AmazonS3; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.PutObjectRequest; public class FileStore { public static void Save(String data) throws IOException { File file = File.createTempFile("filemaster-", ".htm"); file.deleteOnExit(); Writer writer = new OutputStreamWriter(new FileOutputStream(file)); writer.write(data); writer.flush(); writer.close(); String zippedFilename = gzipFile(file.getAbsolutePath()); File zippedFile = new File(zippedFilename); zippedFile.deleteOnExit(); AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials( new FileInputStream("AwsCredentials.properties"))); String bucketName = "mybucket"; String key = "test/" + zippedFile.getName(); s3.putObject(new PutObjectRequest(bucketName, key, zippedFile)); } public static String gzipFile(String filename) throws IOException { try { // Create the GZIP output stream String outFilename = filename + ".gz"; GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename)); // Open the input file FileInputStream in = new FileInputStream(filename); // Transfer bytes from the input file to the GZIP output stream byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } in.close(); // Complete the GZIP file out.finish(); out.close(); return outFilename; } catch (IOException e) { throw e; } } }
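
    Below is a sketch of a fully in-memory variant: the string is gzipped into a byte array and the bytes are handed straight to putObject as an InputStream, so nothing is written to disk. The bucket name and key mirror the question's example (the class name is illustrative); setting the content length on ObjectMetadata matters because otherwise the SDK has to buffer the stream to compute it.

        // Sketch (untested): gzip a String in memory and upload the bytes directly to S3.
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.zip.GZIPOutputStream;
        import com.amazonaws.auth.PropertiesCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.ObjectMetadata;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class InMemoryFileStore {

            public static void save(String data) throws IOException {
                // Compress the string into a byte array.
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                GZIPOutputStream gzip = new GZIPOutputStream(bos);
                gzip.write(data.getBytes("UTF-8"));
                gzip.finish();
                gzip.close();
                byte[] compressed = bos.toByteArray();

                // Upload the compressed bytes without touching the file system.
                AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(
                        new FileInputStream("AwsCredentials.properties")));
                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentLength(compressed.length);
                meta.setContentType("text/html");
                meta.setContentEncoding("gzip");
                s3.putObject(new PutObjectRequest("mybucket", "test/filemaster.htm.gz",
                        new ByteArrayInputStream(compressed), meta));
            }
        }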


  • How to convert a 32bpp image to an indexed format?

    - by Ed Swangren
    So here are the details (I am using C#, BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format by these methods:

        // causes a "Parameter not valid" error
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed)

        // no error, but the resulting image is black due to information loss, I assume
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed)

    I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with, please let me know. EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (a pathology application); I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting the line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently; when the user adjusts the track bar, I adjust the matrix. This does not give me per-pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like all pixels with a brightness value > 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.


  • Web-Frameworks for Education Management Systems?

    - by Indebi
    So, I'm working on an idea and I'll go into a brief overview of that but my question is, What are some good web frameworks for this situation? I have some experience in the following languages: C# Python I have considerably more experience in C# than Python, however I am expecting to learn new things. My idea is this, a completely web-based community-oriented Education Management System that focuses on making students and teachers day-to-day lives easier. For students it will provide a centralized place for them to do homework, study for tests, and reinforce concepts learned previously in class. For teachers it will give them a centralized place to handle assignments, attendance, homework, tests, and all other major parts of classroom management. All of that, but in a community-oriented fashion. Everything a teacher does is shared and open to constructive criticism, allowing other teachers to use their assignments/tests and for students or other teachers to comment, rate and criticize their assignments. This encourages an environment of openness that will allow teacher's to focus on teaching and student's to focus on learning. And that community wouldn't be limited to one school or school-district, this system would be completely school-independent. Please note that I have no problem with hearing constructive criticism on this idea, however I would prefer if this post was more focused on my question. I have somewhat explored about the following options: Django ASP.NET Ruby on Rails Silverlight (1) I have Django installed and I played with it for a little bit, I really like how easy setting up databases are and how it handles the database completely for you. I don't really know how to use it very well and I don't quite understand the Model-View-Controller paradigm(?) for it yet but I haven't thought about it much. I also like the fact that it uses Python. (2) I don't really like Visual Studio for developing in ASP.NET, I hate the way the web-designer works and it just feels clunky and old. I like the server-side development part though. I don't like how expensive ASP.NET and overall Visual Studio is, even if I do get it for free for now using DreamSpark (3) I haven't been able to explore much with this, I could not get Rails (or maybe Ruby) properly installed. I first installed it within RadRails and that didn't work so I uninstalled RadRails and then installed the latest version of Ruby off the official Windows Installer and then installed Ruby on Rails through gem and even after all that it still didn't work, so I installed Netbeans and attempted to use it there but it still did not work (4) I like Silverlight in some extents, I've played with this one the most, it's very similar to WPF (which I've used the most) in a lot of ways but I don't like how database connectivity works, at least in comparison to Django. I also dislike how expensive everything with Microsoft is, even if I get it for free for now with DreamSpark. I would like to hear some suggestions from experienced web-developers as to what I should use and why, or at least what some good options are for my scenario Your help would be very appreciated


  • Html.RadioButtonListFor problem

    - by ognjenb
    <%using (Html.BeginForm("Numbers", "Numbers", FormMethod.Post)) { %> <table id="numbers"> <tr> <th> prvi_br </th> <th> drugi_br </th> <th> treci_br </th> </tr> <%int rb =1; %>" <% foreach (var item in Model) { %> <tr> <td> <%= Html.Encode(item.prvi_br) %> <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>" /> </td> <td> <%= Html.Encode(item.drugi_br) %> <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>"/> </td> <td> <%= Html.Encode(item.treci_br) %> <input type="radio" name="<%= Html.Encode(rb) %>" value="<%= Html.Encode(rb) %>"/> </td> </tr> <% rb++; %> <% } %> </table> <p> <input type="submit" value="Save" /> </p> <%} %> How post this form with only one checked radio button? In my case all of 3 radio buttons is possible to check. How to restrict so that it is possible check only one radio. In this article I found good solutions but it can not be applied because I have a table.


  • Understanding Mongoid's :stored_as => :array option

    - by Gagan
    Hello frens, This is not a problem, but I just want to know stored as array of Mongoid better. I have following code in my model. class Company include Mongoid::Document include Mongoid::Timestamps references_many :people, :stored_as => :array, :inverse_of => :companies end class Person include Mongoid::Document include Sunspot::Mongoid references_many :companies, :stored_as => :array, :inverse_of => :people end Now in Company object we get person_ids as a result of stored_as array and company_ids in Person object. Now initially I inserted lots of person in company and the ids in person_ids fields is huge. Now I deleted most of person from company down to 8 people. Now I don't get why person_ids fields of Company object storing all the deleted ids of person. My console snapshot is follwing ruby-1.9.2-head Company.first.person_ids = [BSON::ObjectId('4d12d2907adf350695000025'), BSON::ObjectId('4d12d2907adf35069500002c'), BSON::ObjectId('4d12d2907adf350695000035'), BSON::ObjectId('4d12d2907adf35069500003f'), BSON::ObjectId('4d12d2907adf350695000048'), BSON::ObjectId('4d12d2907adf350695000052'), BSON::ObjectId('4d12d2907adf350695000059'), BSON::ObjectId('4d12d2907adf350695000062'), BSON::ObjectId('4d12d4017adf35069500008d'), BSON::ObjectId('4d12d4017adf350695000094'), BSON::ObjectId('4d12d4017adf35069500009d'), BSON::ObjectId('4d12d4017adf3506950000a7'), BSON::ObjectId('4d12d4017adf3506950000b0'), BSON::ObjectId('4d12d4017adf3506950000ba'), BSON::ObjectId('4d12d4017adf3506950000c1'), BSON::ObjectId('4d12d4017adf3506950000ca'), BSON::ObjectId('4d12d48a7adf3506950000f5'), BSON::ObjectId('4d12d48a7adf3506950000fc'), BSON::ObjectId('4d12d48a7adf350695000108'), BSON::ObjectId('4d12d48b7adf350695000115'), BSON::ObjectId('4d12d48b7adf350695000121'), BSON::ObjectId('4d12d48b7adf35069500012e'), BSON::ObjectId('4d12d48b7adf350695000135'), BSON::ObjectId('4d12d48b7adf350695000141'), BSON::ObjectId('4d12d53e7adf35069500016f'), BSON::ObjectId('4d12d53e7adf350695000176'), BSON::ObjectId('4d12d53e7adf350695000182'), BSON::ObjectId('4d12d53e7adf35069500018f'), BSON::ObjectId('4d12d53e7adf35069500019b'), BSON::ObjectId('4d12d53f7adf3506950001a8'), BSON::ObjectId('4d12d53f7adf3506950001af'), BSON::ObjectId('4d12d53f7adf3506950001bb'), BSON::ObjectId('4d12d8587adf3506950001e9'), BSON::ObjectId('4d12d8587adf3506950001f0'), BSON::ObjectId('4d12d8587adf3506950001ff'), BSON::ObjectId('4d12d8597adf35069500020f'), BSON::ObjectId('4d12d8597adf35069500021e'), BSON::ObjectId('4d12d8597adf35069500022e'), BSON::ObjectId('4d12d8597adf350695000235'), BSON::ObjectId('4d12d85a7adf350695000244'), BSON::ObjectId('4d12d9587adf35069500025b'), BSON::ObjectId('4d12db8b7adf35069500026a'), BSON::ObjectId('4d12de6f7adf3509c9000024'), BSON::ObjectId('4d12de6f7adf3509c900002b'), BSON::ObjectId('4d12de6f7adf3509c900003a'), BSON::ObjectId('4d12de707adf3509c900004a'), BSON::ObjectId('4d12de707adf3509c9000059'), BSON::ObjectId('4d12de707adf3509c9000069'), BSON::ObjectId('4d12de707adf3509c9000070'), BSON::ObjectId('4d12de717adf3509c900007f'), BSON::ObjectId('4d12e7f27adf350bd2000009'), BSON::ObjectId('4d12e81f7adf350bd2000015'), BSON::ObjectId('4d12e87f7adf350bd2000024'), BSON::ObjectId('4d12e8b87adf350bd200004c'), BSON::ObjectId('4d12e8b97adf350bd2000053'), BSON::ObjectId('4d12e8b97adf350bd200005c'), BSON::ObjectId('4d12e8b97adf350bd2000066'), BSON::ObjectId('4d12e8b97adf350bd200006f'), BSON::ObjectId('4d12e8b97adf350bd2000079'), BSON::ObjectId('4d12e8ba7adf350bd2000080'), BSON::ObjectId('4d12e8ba7adf350bd2000089'), 
BSON::ObjectId('4d12ee6b7adf350bd2000198'), BSON::ObjectId('4d12ee6b7adf350bd200019f'), BSON::ObjectId('4d12ee6c7adf350bd20001a5'), BSON::ObjectId('4d12ee6c7adf350bd20001ac'), BSON::ObjectId('4d12ee6c7adf350bd20001b2'), BSON::ObjectId('4d12ee6c7adf350bd20001b9'), BSON::ObjectId('4d12ee6c7adf350bd20001c0'), BSON::ObjectId('4d12ee6c7adf350bd20001c6'), BSON::ObjectId('4d141ca57adf35033e00006e'), BSON::ObjectId('4d141ca57adf35033e000075'), BSON::ObjectId('4d1420aa7adf350705000003'), BSON::ObjectId('4d1420aa7adf35070500000a'), BSON::ObjectId('4d1420f47adf350705000011'), BSON::ObjectId('4d1420f57adf350705000015'), BSON::ObjectId('4d1420f57adf350705000018'), BSON::ObjectId('4d1420f57adf35070500001c'), BSON::ObjectId('4d1420f57adf350705000023'), BSON::ObjectId('4d1420f57adf350705000026'), BSON::ObjectId('4d14215f7adf35070500004b'), BSON::ObjectId('4d14215f7adf350705000052'), BSON::ObjectId('4d14215f7adf350705000055'), BSON::ObjectId('4d14215f7adf350705000059'), BSON::ObjectId('4d14215f7adf35070500005c'), BSON::ObjectId('4d14215f7adf350705000060'), BSON::ObjectId('4d14215f7adf350705000067'), BSON::ObjectId('4d14215f7adf35070500006a')] Company.first.people.collect(&:id) = [BSON::ObjectId('4d14215f7adf35070500004b'), BSON::ObjectId('4d14215f7adf350705000052'), BSON::ObjectId('4d14215f7adf350705000055'), BSON::ObjectId('4d14215f7adf350705000059'), BSON::ObjectId('4d14215f7adf35070500005c'), BSON::ObjectId('4d14215f7adf350705000060'), BSON::ObjectId('4d14215f7adf350705000067'), BSON::ObjectId('4d14215f7adf35070500006a')] Isn't the Company.first.person_ids array be only storing the ids shown by Company.first.people.collect(&:id) It would be helpful if some one tell me when to best use stored_as = :array method. Do stored_as = :array increase querying performance? Thanks


  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at a first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first. Normally, when we want our program to run at a native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source codes. If you want to run a program of another system in your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have problems of performance and glitches. We also have a newer compiler called "JIT Compiler", where the compiler will parse the bytecode program to native machine language before execution. The performance may increase to a very good extent with JIT Compiler, but the performance is still not the same as running it on a native system. Another program on Linux, WINE, is also a good tool for running Windows program on Linux system. I have tried running Team Fortress 2 on it, and tried experiment with some settings. I got ~40 fps on Windows at its mid-high setting on 1280 x 1024. On Linux, I need to turn everything low at 1280 x 1024 to get ~40 fps. There are 2 notable things though: Polygon model settings do not seem to affect framerate whether I set it low or high. When there are post-processing effects or some special effects that require manipulation of drawn pixels of the current frame, the framerate will drop to 10-20 fps. From this point, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that requires graphic card to the job, it slows down to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run STEAM and Team Fortress 2. Although there are flaws, they can run at lower setting. Or perhaps, I should also ask, "is it possible to translate one whole program on a system to another system without recompiling from source and get native speed?" I see that we also have AOT Compiler, is it possible to use it for something like this? Or there are so many constraints (such as DirectX call or differences in software architecture) that make it impossible to have a flawless and not native to the system program that runs at native speed?

    Read the article

  • Understanding SingleTableEntityPersister and QueryLoader

    - by Iapilgrim
    Hi, I have this Hibernate model: @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION) public class Node extends StatefulEntity implements Inheritable, Cloneable { private Node _parent; private List<Node> _childNodes; .. } @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION) public class Page extends Node implements Defaultable, Securable { private RootZone _rootZone; ...... @OneToOne(fetch = FetchType.LAZY) @JoinColumn(name = "root_zone_id", insertable = false, updatable = false) public RootZone getRootZone() { return _rootZone; } public void setRootZone(RootZone rootZone) { if (rootZone != null) { rootZone.setPageId(this.getId()); _rootZone = rootZone; } } I want to get all pages (the getSiteTree call), so I use this query: String hpql = "SELECT n FROM Node n "; Looking at the trace, I find: Page.setRootZone(RootZone) line: 155 NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method] NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39 DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25 Method.invoke(Object, Object...) line: 597 BasicPropertyAccessor$BasicSetter.set(Object, Object, SessionFactoryImplementor) line: 66 PojoEntityTuplizer(AbstractEntityTuplizer).setPropertyValues(Object, Object[]) line: 352 PojoEntityTuplizer.setPropertyValues(Object, Object[]) line: 232 SingleTableEntityPersister(AbstractEntityPersister).setPropertyValues(Object, Object[], EntityMode) line: 3580 TwoPhaseLoad.initializeEntity(Object, boolean, SessionImplementor, PreLoadEvent, PostLoadEvent) line: 152 QueryLoader(Loader).initializeEntitiesAndCollections(List, Object, SessionImplementor, boolean) line: 877 QueryLoader(Loader).doQuery(SessionImplementor, QueryParameters, boolean) line: 752 QueryLoader(Loader).doQueryAndInitializeNonLazyCollections(SessionImplementor, QueryParameters, boolean) line: 259 QueryLoader(Loader).doList(SessionImplementor, QueryParameters) line: 2232 QueryLoader(Loader).listIgnoreQueryCache(SessionImplementor, QueryParameters) line: 2129 QueryLoader(Loader).list(SessionImplementor, QueryParameters, Set, Type[]) line: 2124 QueryLoader.list(SessionImplementor, QueryParameters) line: 401 QueryTranslatorImpl.list(SessionImplementor, QueryParameters) line: 363 HQLQueryPlan.performList(QueryParameters, SessionImplementor) line: 196 SessionImpl.list(String, QueryParameters) line: 1149 QueryImpl.list() line: 102 QueryImpl.getResultList() line: 67 NodeDaoImpl.getSiteTree(long) line: 358 PageNodeServiceImpl.getSiteTree(long) line: 797 NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method] NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39 DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25 Method.invoke(Object, Object...) line: 597 AopUtils.invokeJoinpointUsingReflection(Object, Method, Object[]) line: 307 JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 198 $Proxy100.getSiteTree(long) line: not available The call to setRootZone in Page makes Hibernate issue a hit to the database. I don't want this. So my questions are: why does the query String hpql = "SELECT n FROM Node n "; produce the unexpected trace above, why does the query String hpql = "SELECT n.nodename FROM Node n " not, and what is the mechanism behind this? Note: I'm using Hibernate second-level caching. If I don't want to trigger that behaviour (I mean I want to load Node data only), how do I do that? Thanks for your help. Sorry for my bad English :( Van

    Read the article

  • Under what circumstances would a LINQ-to-SQL Entity "lose" a changed field?

    - by John Rudy
    I'm going nuts over what should be a very simple situation. In an ASP.NET MVC 2 app (not that I think this matters), I have an edit action which takes a very small entity and makes a few changes. The key portion (outside of error handling/security) looks like this: Todo t = Repository.GetTodoByID(todoID); UpdateModel(t); Repository.Save(); Todo is the very simple, small entity with the following fields: ID (primary key), FolderID (foreign key), PercentComplete, TodoText, IsDeleted and SaleEffortID (foreign key). Each of these obviously corresponds to a field in the database. When UpdateModel(t) is called, t does get correctly updated for all fields which have changed. When Repository.Save() is called, by the time the SQL is written out, FolderID reverts back to its original value. The complete code to Repository.Save(): public void Save() { myDataContext.SubmitChanges(); } myDataContext is an instance of the DataContext class created by the LINQ-to-SQL designer. Nothing custom has been done to this aside from adding some common interfaces to some of the entities. I've validated that the FolderID is getting lost before the call to Repository.Save() by logging out the generated SQL: UPDATE [Todo].[TD_TODO] SET [TD_PercentComplete] = @p4, [TD_TodoText] = @p5, [TD_IsDeleted] = @p6 WHERE ([TD_ID] = @p0) AND ([TD_TDF_ID] = @p1) AND /* Folder ID */ ([TD_PercentComplete] = @p2) AND ([TD_TodoText] = @p3) AND (NOT ([TD_IsDeleted] = 1)) AND ([TD_SE_ID] IS NULL) /* SaleEffort ID */ -- @p0: Input BigInt (Size = -1; Prec = 0; Scale = 0) [5] -- @p1: Input BigInt (Size = -1; Prec = 0; Scale = 0) [1] /* this SHOULD be 4 and in the update list */ -- @p2: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [90] -- @p3: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text] -- @p4: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [0] -- @p5: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text foo] -- @p6: Input Bit (Size = -1; Prec = 0; Scale = 0) [True] -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 4.0.30319.1 So somewhere between UpdateModel(t) (where I've validated in the debugger that FolderID updated) and the output of this SQL, the FolderID reverts. The other fields all save. (Well, OK, I haven't validated SaleEffortID yet, because that subsystem isn't really ready yet, but everything else saves.) I've exhausted my own means of research on this: Does anyone know of conditions which would cause a partial entity reset (EG, something to do with long foreign keys?), and/or how to work around this?
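
    One pattern worth checking, sketched below with hypothetical member names (a Folder association on Todo and a Folders table property on the DataContext are assumptions, since the designer-generated association is not shown above): when an entity carries both a foreign-key column property and a loaded association, LINQ-to-SQL can sync the scalar back from the association at SubmitChanges time, so assigning the association explicitly instead of only the FolderID value is a reasonable experiment rather than a confirmed diagnosis.

        // Sketch only: assumes the L2S designer generated a Folder association
        // (EntityRef<Folder>) on Todo alongside the FolderID column property.
        Todo t = Repository.GetTodoByID(todoID);
        UpdateModel(t); // sets t.FolderID (and the other scalars) from the posted form

        // Hypothetical workaround: set the association itself, so a previously
        // loaded Folder reference cannot overwrite the new FolderID on save.
        long newFolderID = t.FolderID;
        t.Folder = myDataContext.Folders.Single(f => f.FolderID == newFolderID);

        Repository.Save(); // SubmitChanges()

    If the generated UPDATE then carries the new TD_TDF_ID, the association sync was the culprit; if it still reverts, that rules this particular cause out.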

    Read the article

  • Unrequired property keeps getting data-val-required attribute

    - by frennky
    This is the model with its validation: [MetadataType(typeof(TagValidation))] public partial class Tag { } public class TagValidation { [Editable(false)] [HiddenInput(DisplayValue = false)] public int TagId { get; set; } [Required] [StringLength(20)] [DataType(DataType.Text)] public string Name { get; set; } //... } Here is the view: <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>Tag</legend> <div>@Html.EditorForModel()</div> <p> <input type="submit" value="Create" /> </p> </fieldset> } <div> @Html.ActionLink("Back to List", "Index") </div> And here is what gets rendered: <form action="/Tag/Create" method="post"> <fieldset> <legend>Tag</legend> <div><input data-val="true" data-val-number="The field TagId must be a number." data-val-required="The TagId field is required." id="TagId" name="TagId" type="hidden" value="" /> <div class="editor-label"><label for="Name">Name</label></div> <div class="editor-field"><input class="text-box single-line" data-val="true" data-val-length="The field Name must be a string with a maximum length of 20." data-val-length-max="20" data-val-required="The Name field is required." id="Name" name="Name" type="text" value="" /> <span class="field-validation-valid" data-valmsg-for="Name" data-valmsg-replace="true"></span></div> ... </fieldset> </form> The problem is that TagId validation gets generated although there is no Required attribute set on the TagId property. Because of that I can't even pass the client-side validation in order to create a new Tag in the db. What am I missing?
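
    For what it's worth, ASP.NET MVC's default DataAnnotations validator provider adds an implicit [Required] to non-nullable value types such as int, which matches the symptom on TagId. Below is a minimal sketch of one way to switch that behaviour off globally, assuming MVC 3 (which the unobtrusive validation scripts suggest):

        // Global.asax.cs sketch: stop the DataAnnotations provider from adding
        // an implicit [Required] to non-nullable value types like int TagId.
        using System.Web.Mvc;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                DataAnnotationsModelValidatorProvider
                    .AddImplicitRequiredAttributeForValueTypes = false;

                // ... the usual area/filter/route registration goes here ...
            }
        }

    Alternatively, if changing the type is acceptable, declaring the key as a nullable int on the model avoids the implicit rule for that one property.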

    Read the article

  • Why is this Rails association loading individually after an eager load?

    - by codeman73
    I'm trying to avoid the N+1 queries problem with eager loading, but it's not working. The associated models are still being loaded individually. Here are the relevant ActiveRecords and their relationships: class Player < ActiveRecord::Base has_one :tableau end class Tableau < ActiveRecord::Base belongs_to :player has_many :tableau_cards has_many :deck_cards, :through => :tableau_cards end class TableauCard < ActiveRecord::Base belongs_to :tableau belongs_to :deck_card, :include => :card end class DeckCard < ActiveRecord::Base belongs_to :card has_many :tableaus, :through => :tableau_cards end class Card < ActiveRecord::Base has_many :deck_cards end and the query I'm using is inside this method of Player: def tableau_contains(card_id) self.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', self.tableau.id] contains = false for tableau_card in self.tableau.tableau_cards # my logic here, looking at attributes of the Card model, with # tableau_card.deck_card.card; # individual loads of related Card models related to tableau_card are done here end return contains end Does it have to do with scope? This tableau_contains method is down a few method calls in a larger loop, where I originally tried doing the eager loading because there are several places where these same objects are looped through and examined. Then I eventually tried the code as it is above, with the load just before the loop, and I'm still seeing the individual SELECT queries for Card inside the tableau_cards loop in the log. I can see the eager-loading query with the IN clause just before the tableau_cards loop as well. EDIT: additional info below with the larger, outer loop. Here's the larger loop. It is inside an observer on after_save: def after_save(pa) @game = Game.find(turn.game_id, :include => :goals) @game.players = Player.find :all, :include => [ {:tableau => (:tableau_cards)}, :player_goals ], :conditions => ['players.game_id =?', @game.id] for player in @game.players player.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', player.tableau.id] if(player.tableau_contains(card)) ... end end end

    Read the article

  • ASP.NET MVC grid/table

    - by nivlam
    public class Person { public string First { get; set; } public string Last { get; set; } public int Age { get; set; } public IEnumerable<Child> Children { get; set; } } public class Child { public string First { get; set; } public string Last { get; set; } public int Age { get; set; } } I'm searching for a way to render a table from my model, which is of type IEnumerable<Person>. I'm trying to generate the following table: <table> <tr class="person"> <td>First 1</td> <td>Last 1</td> <td>1</td> </tr> <tr class="child"> <td>First 1</td> <td>Last 1</td> <td>1</td> </tr> <tr class="child"> <td>First 2</td> <td>Last 2</td> <td>2</td> </tr> ... ... </table> Each person is a row and each of their children would be individual rows under the person row. This would repeat for each person in IEnumerable<Person>. Are there any grids or components that generate a table like this? I found MvcContrib's grid component, but it doesn't appear to be able to generate these child rows. Is there a way to extend MvcContrib's grid to do this?
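
    If no existing grid component supports these nested child rows, one fallback is a small hand-rolled HtmlHelper extension. The sketch below is only an illustration of that approach (it is not an MvcContrib feature), reusing the Person and Child classes above:

        // Sketch: renders one <tr class="person"> per person followed by one
        // <tr class="child"> per child, matching the target markup above.
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Web.Mvc;

        public static class PersonTableExtensions
        {
            public static MvcHtmlString PersonTable(this HtmlHelper html, IEnumerable<Person> people)
            {
                var sb = new StringBuilder("<table>");
                foreach (var person in people)
                {
                    AppendRow(sb, html, "person", person.First, person.Last, person.Age);
                    foreach (var child in person.Children ?? Enumerable.Empty<Child>())
                    {
                        AppendRow(sb, html, "child", child.First, child.Last, child.Age);
                    }
                }
                sb.Append("</table>");
                return MvcHtmlString.Create(sb.ToString());
            }

            private static void AppendRow(StringBuilder sb, HtmlHelper html,
                                          string cssClass, string first, string last, int age)
            {
                sb.AppendFormat("<tr class=\"{0}\"><td>{1}</td><td>{2}</td><td>{3}</td></tr>",
                    cssClass, html.Encode(first), html.Encode(last), age);
            }
        }

    In a view it would then be used roughly as <%= Html.PersonTable(Model) %> (or @Html.PersonTable(Model) under Razor).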

    Read the article

  • Array values disappear in PHP SoapClient call to Cisco phone system.

    - by Jamin
    I am attempting to consume a SOAP service provided by our Cisco phone system (documentation), to get the current status of a given set of phones. I have an array of phone names, which I'm trying to pass to the service; however, the values of the array are being eaten somewhere. The array of items looks like this: $items = array( 0 => "SEP0004F2E57F8C", 1 => "SEP001111BF8758", 2 => "SEP001320BD485C" ); Attempting to call the method: $client = new SoapClient( "https://x.x.x.x/realtimeservice/services/RisPort?wsdl", array( "login" => "admin", "password"=> "xxxxx", "trace" => true ) ); $devices = $client->SelectCmDevice( "", array( "SelectBy" => "Name", "Status" => "Any", "SelectedItems" => $items ) ); When I debug the complete request I get the following: <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.cisco.com/ast/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Body> <ns1:SelectCmDevice> <StateInfo xsi:type="xsd:string"></StateInfo> <CmSelectionCriteria xsi:type="ns1:CmSelectionCriteria"> <MaxReturnedDevices xsi:nil="true"/> <Class xsi:nil="true"/> <Model xsi:nil="true"/> <Status xsi:type="xsd:string">Any</Status> <NodeName xsi:nil="true"/> <SelectBy xsi:type="xsd:string">Name</SelectBy> <SelectItems SOAP-ENC:arrayType="ns1:SelectItem[3]" xsi:type="ns1:SelectItems"> <item xsi:type="ns1:SelectItem"/> <item xsi:type="ns1:SelectItem"/> <item xsi:type="ns1:SelectItem"/> </SelectItems> </CmSelectionCriteria> </ns1:SelectCmDevice> </SOAP-ENV:Body> </SOAP-ENV:Envelope> The correct number of <item> elements was counted and inserted into the <SelectItems> element; however, the actual item names themselves are gone. I would guess it needs to be <item>SEP0004F2E57F8C</item>, etc., but I can't seem to figure out how to make it do that. Thank you in advance for any help!!!

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before and I'm looking at the DB structure. Some of the central and most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields: TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK SubGroup: PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK SubSubGroup: PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK I haven't listed the types of the fields as I don't think it's particularly important to the situation. In addition, it's worth saying that rather than four repeated fields as in the example above, I'm looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason. In addition, the "groups" represented here have a property inheritance relationship. If DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require that the model extend beyond three levels, so there is no need for flexibility in that area. Is there a design smell from several tables which describe very similar entities having almost identical fields? If so, what might be a better design of the example above? I'm using the phrase "design smell" to indicate a possible problem. Of course, in any given situation, a particular design might well be the best solution. I'm looking for a more general answer - wondering what might be wrong with this design and what might be the better design were that the case. Possibly related, but not primary questions: Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied though? Are repeating groups allowed in different tables? Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me because any query on the SubSubGroup necessarily needs to join onto the SubGroup and the TopLevelGroup tables to collect inherited facts, which can make even trivial joins requiring facts from the SubSubGroup table rather long-winded. There are, of course, political considerations to making a relatively large change like this. For the purpose of this question, I'm happy to ignore that fact in the interests of keeping the answers ring-fenced to the technical problem.

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer. This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering: Service Locator Don't use dependency injection, instead use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context? Dependency Injection via properties Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. Inject dependencies with an Initialize method This is much like injection via properties but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assists you with errors when the list changes. Any idea what the best practice is here? How do you do it? edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification) so I don't think that "you shouldn't inject any services" is a good answer. edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
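
    As a concrete illustration of the property-injection alternative, here is a minimal sketch; ClockComponent, ITimeService, and SystemTimeService are made-up names, and the lazy default exists purely so the designer and a parameterless construction path keep working:

        using System;
        using System.ComponentModel;

        // Hypothetical dependency; in a real project this would be whatever
        // service the component actually needs.
        public interface ITimeService { DateTime Now { get; } }
        public class SystemTimeService : ITimeService
        {
            public DateTime Now { get { return DateTime.Now; } }
        }

        public class ClockComponent : Component
        {
            private ITimeService _timeService;

            // Parameterless constructor keeps the component designable.
            public ClockComponent() { }

            // Property injection: a container (or calling code) can override
            // the dependency, while the lazy default keeps the designer happy.
            public ITimeService TimeService
            {
                get { return _timeService ?? (_timeService = new SystemTimeService()); }
                set { _timeService = value; }
            }
        }

    The trade-off is the one the question already hints at: the required dependencies are no longer visible from a constructor signature, so they have to be documented, or gathered into a single Initialize method as in the third alternative.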

    Read the article
