Search Results

Search found 6149 results on 246 pages for 'concept mapping'.

Page 11/246 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • IIS7 Handler Mapping Migration from Sites Config to Server Config [migrated]

    - by Danomite
    We have a bunch of sites running with about 8 handler mappings in their web.config files. Unfortunately, they were getting copied from site to site every time a new one was added. Now the time has come for me to get these out of all the web.config files and into the server's Handler Mappings. If I add a mapping to the server while it still exists in a site's web.config, IIS throws an error when you browse to the site. I have a few dozen web.config files to edit here, with about 10 mappings in each. Is there a way to add these mappings to the server without having to go in and edit each web.config file manually? Otherwise, every site will be down for a few minutes while I go into each file and remove the handlers. Thanks!
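    One way to avoid the hand-editing, sketched below and not tested against this exact setup: a small console utility that strips the handlers section out of each web.config so the server-level mappings can take over. It assumes the duplicated mappings sit at configuration/system.webServer/handlers; the paths are passed on the command line.

    using System;
    using System.Xml.Linq;

    class StripHandlers
    {
        static void Main(string[] args)
        {
            // Each argument is the path to one site's web.config. Back the files up first.
            foreach (var path in args)
            {
                var doc = XDocument.Load(path);
                // Assumption: the duplicated mappings live under configuration/system.webServer/handlers.
                var handlers = doc.Root?.Element("system.webServer")?.Element("handlers");
                if (handlers == null) continue;
                handlers.Remove();
                doc.Save(path);
                Console.WriteLine("Removed handler mappings from " + path);
            }
        }
    }

    Running it across all the config files in one pass keeps the window during which a site has neither the local nor the server-level mappings as short as possible.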

    Read the article

  • Basic shadow mapping fails on NVIDIA card?

    - by James
    Recently I switched from an AMD Radeon HD 6870 card to an (MSI) NVIDIA GTX 670 for performance reasons. I found however that my implementation of shadow mapping in all my applications failed. In a very simple shadow POC project the problem appears to be that the scene being drawn never results in a draw to the depth map and as a result the entire depth map is just infinity, 1.0 (Reading directly from the depth component after draw (glReadPixels) shows every pixel is infinity (1.0), replacing the depth comparison in the shader with a comparison of the depth from the shadow map with 1.0 shadows the entire scene, and writing random values to the depth map and then not calling glClear(GL_DEPTH_BUFFER_BIT) results in a random noisy pattern on the scene elements - from which we can infer that the uploading of the depth texture and comparison within the shader are functioning perfectly.) Since the problem appears almost certainly to be in the depth render, this is the code for that: const int s_res = 1024; GLuint shadowMap_tex; GLuint shadowMap_prog; GLint sm_attr_coord3d; GLint sm_uniform_mvp; GLuint fbo_handle; GLuint renderBuffer; bool isMappingShad = false; //The scene consists of a plane with box above it GLfloat scene[] = { -10.0, 0.0, -10.0, 0.5, 0.0, 10.0, 0.0, -10.0, 1.0, 0.0, 10.0, 0.0, 10.0, 1.0, 0.5, -10.0, 0.0, -10.0, 0.5, 0.0, -10.0, 0.0, 10.0, 0.5, 0.5, 10.0, 0.0, 10.0, 1.0, 0.5, ... }; //Initialize the stuff used by the shadow map generator int initShadowMap() { //Initialize the shadowMap shader program if (create_program("shadow.v.glsl", "shadow.f.glsl", shadowMap_prog) != 1) return -1; const char* attribute_name = "coord3d"; sm_attr_coord3d = glGetAttribLocation(shadowMap_prog, attribute_name); if (sm_attr_coord3d == -1) { fprintf(stderr, "Could not bind attribute %s\n", attribute_name); return 0; } const char* uniform_name = "mvp"; sm_uniform_mvp = glGetUniformLocation(shadowMap_prog, uniform_name); if (sm_uniform_mvp == -1) { fprintf(stderr, "Could not bind uniform %s\n", uniform_name); return 0; } //Create a framebuffer glGenFramebuffers(1, &fbo_handle); glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); //Create render buffer glGenRenderbuffers(1, &renderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); //Setup the shadow texture glGenTextures(1, &shadowMap_tex); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, s_res, s_res, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); return 0; } //Delete stuff void dnitShadowMap() { //Delete everything glDeleteFramebuffers(1, &fbo_handle); glDeleteRenderbuffers(1, &renderBuffer); glDeleteTextures(1, &shadowMap_tex); glDeleteProgram(shadowMap_prog); } int loadSMap() { //Bind MVP stuff glm::mat4 view = glm::lookAt(glm::vec3(10.0, 10.0, 5.0), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0)); glm::mat4 projection = glm::ortho<float>(-10,10,-8,8,-10,40); glm::mat4 mvp = projection * view; glm::mat4 biasMatrix( 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); glm::mat4 lsMVP = biasMatrix * mvp; //Upload light source matrix to the main shader programs glUniformMatrix4fv(uniform_ls_mvp, 1, GL_FALSE, glm::value_ptr(lsMVP)); glUseProgram(shadowMap_prog); glUniformMatrix4fv(sm_uniform_mvp, 1, 
GL_FALSE, glm::value_ptr(mvp)); //Draw to the framebuffer (with depth buffer only draw) glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap_tex, 0); glDrawBuffer(GL_NONE); glReadBuffer(GL_NONE); GLenum result = glCheckFramebufferStatus(GL_FRAMEBUFFER); if (GL_FRAMEBUFFER_COMPLETE != result) { printf("ERROR: Framebuffer is not complete.\n"); return -1; } //Draw shadow scene printf("Creating shadow buffers..\n"); int ticks = SDL_GetTicks(); glClear(GL_DEPTH_BUFFER_BIT); //Wipe the depth buffer glViewport(0, 0, s_res, s_res); isMappingShad = true; //DRAW glEnableVertexAttribArray(sm_attr_coord3d); glVertexAttribPointer(sm_attr_coord3d, 3, GL_FLOAT, GL_FALSE, 5*4, scene); glDrawArrays(GL_TRIANGLES, 0, 14*3); glDisableVertexAttribArray(sm_attr_coord3d); isMappingShad = false; glBindFramebuffer(GL_FRAMEBUFFER, 0); printf("Render Sbuf in %dms (GLerr: %d)\n", SDL_GetTicks() - ticks, glGetError()); return 0; } This is the full code for the POC shadow mapping project (C++) (Requires SDL 1.2, SDL-image 1.2, GLEW (1.5) and GLM development headers.) initShadowMap is called, followed by loadSMap, the scene is drawn from the camera POV and then dnitShadowMap is called. I followed this tutorial originally (Along with another more comprehensive tutorial which has disappeared as this guy re-configured his site but used to be here (404).) I've ensured that the scene is visible (as can be seen within the full project) to the light source (which uses an orthogonal projection matrix.) Shader utilities function fine in non-shadow-mapped projects. I should also note that at no point is the GL error state set. What am I doing wrong here and why did this not cause problems on my AMD card? (System: Ubuntu 12.04, Linux 3.2.0-49-generic, 64 bit, with the nvidia-experimental-310 driver package. All other games are functioning fine so it's most likely not a card/driver issue.)

    Read the article

  • Mapping Object Relationships - QuickStart with NHibernate (Part 3)

    - by BobPalmer
    For this third tutorial, we'll be introducing users new to NHibernate to basic object relationships, starting with a simple many-to-one relationship. I decided that it would make sense to at least get readers through some basic relationship mapping (including varieties of parent/child and many-to-many relationships) before diverging into UI, since most folks are looking for enough to bootstrap themselves into using NHibernate, and that almost always means some kind of relation between their objects. You can find a link to the article at: http://docs.google.com/Doc?docid=0AUP-rKyyUMKhZGczejdxeHZfMjJmM3c3M3Bnbg&hl=en As always, comments, corrections, and suggestions are appreciated! -Bob
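    The article walks through its own example; purely as a flavour of what a simple many-to-one looks like, here is a minimal sketch in Fluent NHibernate syntax (which appears elsewhere on this page), with made-up Product and Category classes:

    public class Category
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public class Product
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual Category Category { get; set; }  // many Products -> one Category
    }

    public class ProductMap : FluentNHibernate.Mapping.ClassMap<Product>
    {
        public ProductMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            References(x => x.Category);  // emits the many-to-one, foreign key column named by convention
        }
    }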

    Read the article

  • bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps... I have tried doing it this way, however I get a function overload error on max and dot. uniform sampler2D n_mapTex; uniform sampler2D n_mapTex2; uniform sampler2D refTex; varying mediump vec2 TexCoord; varying mediump float vTime; void main() { mediump vec4 wave = texture2D(n_mapTex, TexCoord - vTime); mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime); mediump vec4 bump = mix(wave2, wave, 0.5); //this extracts the normals from the combined normal maps mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0); //determines light position mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0)); mediump float diffuse = max(dot(normal, lightPos),0.0); gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5); }

    Read the article

  • What would be the equivalent VB.NET code for this C# FluentNHibernate component mapping?

    - by Will Marcouiller
    I'm a C# programmer constrained to write VB.NET code. While exploring NHibernate further for my current client, I encountered FluentNHibernate, which I find really attractive. But now I wonder how to "translate" this C# code for component mapping into VB.NET: Component(x => x.Address, m => { m.Map(x => x.Number); m.Map(x => x.Street); m.Map(x => x.PostCode); }); I know it begins like this: Component(Of Client)(Function(c) c.Address, ...) What I'm missing is how to continue with the braces in VB.NET, since there are no Begin/End keywords for this. EDIT 1: Following Mr. JaredPar's instructions, I figured that his solution might work. If you take the time to read his answer, you may notice that neither of us knows what the MType in his solution is. I may have found out that the MType is: FluentNHibernate.Mapping.ComponentPart(Of TComponent) Thus, TComponent is, from my understanding, a type parameter that I have to supply. From this point of view, since I wish to map the properties of my Address object, substituting TComponent in my helper method signature doesn't seem to work: Private Sub MapAdresseHelper(Of Adresse)(ByVal a As FluentNHibernate.Mapping.ComponentPart(Of Adresse)) a.Map(Function(m) m.Number) a.Map(Function(m) m.Street).Length(50) a.Map(Function(m) m.PostCode).Length(10) End Sub The error I get is that my Address class doesn't have a property member named Street, for instance. It sees my Address type and recognizes it, but something seems buggy. I guess VB.NET is poorly suited to lambda expressions and less evolved than C# in this respect (sorry, a bit of frustration due to the constraint of working with it and not being able to do things that are VERY easily done in C#). Thanks!

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple scheme. Essentially, Run <-- Data (where a Run holds a Data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work - except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "DataToData", $source.dataSet) Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToRun", $source) Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source.run) So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • problems texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is the texture shows but has this weird flicker (I can show a video here: http://www.youtube.com/watch?v=xbzw_LMxlHw). I have everything set up as best I can: my texture coordinates are in my vertex array and sent up to OpenGL, my fragment color is set from the texture and texel values, my vertex shader sends the texture coordinates on to the fragment shader, and my ins and outs are set up - and I still don't know what I'm missing that could be causing that flicker. Here is my code. FRAGMENT SHADER #version 150 uniform sampler2D texture; in vec2 texture_coord; varying vec3 texture_coordinate; void main(void){ gl_FragColor = texture(texture, texture_coord); } VERTEX SHADER #version 150 in vec4 position; out vec2 texture_coordinate; out vec2 texture_coord; uniform vec3 translations; void main() { texture_coord = (texture_coordinate); gl_Position = vec4(position.xyz + translations.xyz, 1.0); } Last bit: here is my vertex array with texture coordinates GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f , 0.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f}; //tex x and y If you need to see all the code in its fullest glory, here is a link to every file: http://ideone.com/7kQN3 Thank you for your help.

    Read the article

  • Fluent NHibernate and mapping IDictionary<DayOfWeek, IDictionary<int, decimal>> - how to?

    - by JS Future Software
    Hello, I have a problem mapping a class with a property of type Dictionary whose values are Dictionaries too, like this: public class Class1 { public virtual int Id { get; set; } public virtual IDictionary<DayOfWeek, IDictionary<int, decimal>> Class1Dictionary { get; set; } } My mapping looks like this: Id(i => i.Id); HasMany(m => m.Class1Dictionary); This doesn't work. The important thing is that I want everything in one table, not two. When I made a class out of this second IDictionary I had an even bigger problem. But first I want to try it like it is now.

    Read the article

  • Infinite detail inside Perlin noise procedural mapping

    - by Dave Jellison
    I am very new to game development but I was able to scour the internet to figure out Perlin noise enough to implement a very simple 2D tile infinite procedural world. Here's the question and it's more conceptual than code-based in answer, I think. I understand the concept of "I plug in (x, y) and get back from Perlin noise p" (I'll call it p). P will always be the same value for the same (x, y) (as long as the Perlin algorithm parameters haven't changed, like altering number of octaves, et cetera). What I want to do is be able to zoom into a square and be able to generate smaller squares inside of the already generated overhead tile of terrain. Let's say I have a jungle tile for overhead terrain but I want to zoom in and maybe see a small river tile that would only be a creek and not large enough to be a full "big tile" of water in the overhead. Of course, I want the same net effect as a Perlin equation inside a Perlin equation if that makes sense? (aka. I want two people playing the game with the same settings to get the same terrain and details every time). I can conceptually wrap my head around the large tile being based on an "zoomed out" coordinate leaving enough room to drill into but this approach doesn't make sense in my head (maybe I'm wrong). I'm guessing with this approach my overhead terrain would lose all of the cohesiveness delivered by the Perlin. Imagine I calculate (0, 0) as overhead tile 1 and then to the east of that I plug in (50, 0). OK, great, I now have 49 pixels of detail I could then "drill down" into. The issue I have in my head with this approach (without attempting it) is that there's no guarantee from my Perlin noise that (0,0) would be a good neighbor to (50,0) as they could have wildly different "elevations" or p/resultant values returning from the Perlin equation when I generate the overhead map. I think I can conceive of using the Perlin noise for the overhead tile to then reuse the p value as a seed for the "detail" level of noise once I zoom in. That would ensure my detail Perlin is always the same configuration for (0,0), (1,0), etc. ad nauseam but I'm not sure if there are better approaches out there or if this is a sound approach at all.
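    A rough sketch of that last idea - keep the detail level deterministic by deriving its seed from the overhead tile's coordinates, and blend it with the overhead value so neighbouring tiles stay coherent. The hash below is only a stand-in for a real Perlin/noise implementation, and the names and blend weights are made up:

    using System;

    static class LayeredNoiseSketch
    {
        // Deterministic integer hash -> [0, 1). Stand-in for a proper noise function.
        static double Hash(int seed, int x, int y)
        {
            unchecked
            {
                uint h = (uint)seed;
                h ^= (uint)x * 374761393u;
                h ^= (uint)y * 668265263u;
                h = (h ^ (h >> 13)) * 1274126177u;
                return (h ^ (h >> 16)) / (double)uint.MaxValue;
            }
        }

        // Same (worldSeed, tile, sub-cell) always yields the same value, so two players
        // with the same settings see the same overhead tile and the same zoomed-in detail.
        public static double Sample(int worldSeed, int tileX, int tileY, int subX, int subY)
        {
            double overhead = Hash(worldSeed, tileX, tileY);                       // coarse tile value
            int detailSeed = worldSeed ^ (tileX * 73856093) ^ (tileY * 19349663);  // per-tile seed
            double detail = Hash(detailSeed, subX, subY);                          // zoomed-in value
            return 0.75 * overhead + 0.25 * detail;  // detail stays anchored to the overhead terrain
        }
    }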

    Read the article

  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months which has a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multi fractals algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multi fractals. I don't understand how I generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plug that into the RMF and take that position and translate into u,v coordinates then determine the pixel position directly from the u,v coordinates and determine the grayscale color based on the height. I feel as if this is the right approach but there are a few other things that may further complicate my problem. First of all I intend to use "height maps" with a pixel resolution of 192x192 while the vertex "resolution" of each terrain patch is only 16x16 - meaning that I don't have any vertices to sample for the RMF for most of the pixels. The main reason the height map resolution is higher so that I can use it to generate a normal map (otherwise the height maps serve little purpose as I can just directly displace vertices as I currently am). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with. Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the positionmap (RGB) – this will be used to calculate the normalmap. The fourth channel (A) is used to store the height value itself, to be used in the heightmap. The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere and what positions are sampled for the RMF to generate the pixels if only vertices cannot be used.
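    For what it's worth, the usual reading of that part of the paper is that every heightmap pixel (not just every vertex) gets its own position: pixel -> patch rectangle on the cube face -> point on the cube -> normalised onto the sphere -> fed to the noise function. A rough sketch, assuming the patch lives on the +Z face and the ridged-multifractal function is handed in as a delegate (all names are hypothetical):

    using System;

    static class PatchHeightmapSketch
    {
        // sampleRmf: 3D noise over the unit sphere, returning a normalised [0, 1] height.
        // (u0, v0)-(u1, v1): the patch's rectangle on one cube face, in [0, 1] face coordinates.
        public static double[,] Build(Func<double, double, double, double> sampleRmf,
                                      double u0, double v0, double u1, double v1, int res = 192)
        {
            var heights = new double[res, res];
            for (int py = 0; py < res; py++)
            {
                for (int px = 0; px < res; px++)
                {
                    // Pixel centre in face coordinates, then remapped to [-1, 1] on the +Z face.
                    double u = u0 + (u1 - u0) * (px + 0.5) / res;
                    double v = v0 + (v1 - v0) * (py + 0.5) / res;
                    double x = u * 2.0 - 1.0, y = v * 2.0 - 1.0, z = 1.0;

                    // Cube -> sphere: push the cube point out onto the unit sphere.
                    double len = Math.Sqrt(x * x + y * y + z * z);
                    heights[py, px] = sampleRmf(x / len, y / len, z / len);
                }
            }
            return heights;
        }
    }

    The 16x16 vertex grid then just evaluates the same function at its own, coarser positions, so the vertices and the 192x192 height/normal map stay consistent.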

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier on an Item's Suppliers tab always removes all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is as follows. In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned to a User Group list. In WebClient, the user created a new Part and assigned MultiList01 two User Groups: "Global User Group Test1" and "Personal Group_Test1". He then went to the Suppliers tab and added three Suppliers. Switching back to the Part's TitleBlock, he saw that MultiList01 had lost the User Group data. To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found that MultiList01 stores the strange value ",7976911,7976907,7976959,", which is exactly the IDs of these three Suppliers. So I suspected the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01. So far we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to a User Group list and Supplier does not set "ASL mapped to". There must be another function which overrides the "ASL mapped to" visibility in JavaClient with higher priority. That is the "Change Controlled" function. We immediately see "ASL mapped to" with the value "MultiList01" when we disable Change Controlled for MultiList01. If an attribute is Change Controlled, Supplier data theoretically cannot be mapped to it, because Supplier can be modified dynamically by users, not by Changes. In the real situation on Agile 9.3.1.2, it could be a code defect. We can imagine the scenario the customer met. He set up Parts.PageTwo.MultiList01 assigned to a Supplier list, then on the Parts.Suppliers.Supplier attribute he set "ASL mapped to" to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to a User Group list. He forgot to remove "ASL mapped to" before he modified MultiList01. Finally, the solution depends on the real business need. If Supplier still needs to be mapped to a Parts.PageTwo attribute, modify "ASL mapped to" to point to another attribute that is already assigned to a Suppliers list. If the "ASL mapped to" function is not needed, delete the data at the database level - it cannot be done from the JavaClient UI: delete propertytable where id in (select p.id from propertytable p, nodetable n where p.parentid =n.id and n.inherit=2000004219 and propertyid=794)

    Read the article

  • Bump mapping Problem GLSL

    - by jmfel1926
    I am having a slight problem with my Bump Mapping project. Although everything works OK (at least from what I know) there is a slight mistake somewhere and I get incorrect shading on the brick wall when the light goes to the one side or the other as seen in the picture below: The light is on the right side so the shading on the wall should be the other way. I have provided the shaders to help find the issue (I do not have much experience with shaders). Shaders: varying vec3 viewVec; varying vec3 position; varying vec3 lightvec; attribute vec3 tangent; attribute vec3 binormal; uniform vec3 lightpos; uniform mat4 cameraMat; void main() { gl_TexCoord[0] = gl_MultiTexCoord0; gl_Position = ftransform(); position = vec3(gl_ModelViewMatrix * gl_Vertex); lightvec = vec3(cameraMat * vec4(lightpos,1.0)) - position ; vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex); viewVec = normalize(-eyeVec); } uniform sampler2D colormap; uniform sampler2D normalmap; varying vec3 viewVec; varying vec3 position; varying vec3 lightvec; vec3 vv; uniform float diffuset; uniform float specularterm; uniform float ambientterm; void main() { vv=viewVec; vec3 normals = normalize(texture2D(normalmap,gl_TexCoord[0].st).rgb * 2.0 - 1.0); normals.y = -normals.y; //normals = (normals * gl_NormalMatrix).xyz ; vec3 distance = lightvec; float dist_number =length(distance); float final_dist_number = 2.0/pow(dist_number,diffuset); vec3 light_dir=normalize(lightvec); vec3 Halfvector = normalize(light_dir+vv); float angle=max(dot(Halfvector,normals),0.0); angle= pow(angle,specularterm); vec3 specular=vec3(angle,angle,angle); float diffuseterm=max(dot(light_dir,normals),0.0); vec3 diffuse = diffuseterm * texture2D(colormap,gl_TexCoord[0].st).rgb; vec3 ambient = ambientterm *texture2D(colormap,gl_TexCoord[0].st).rgb; vec3 diffusefinal = diffuse * final_dist_number; vec3 finalcolor=diffusefinal+specular+ambient; gl_FragColor = vec4(finalcolor, 1.0); }

    Read the article

  • Does this inheritance design belong in the database?

    - by Berryl
    === CLARIFICATION ==== The 'answers' older than March are not answers to the question in this post! Hello In my domain I need to track allocations of time spent on Activities by resources. There are two general types of Activities of interest - ones base on a Project and ones based on an Account. The notion of Project and Account have other features totally unrelated to both each other and capturing allocations of time, and each is modeled as a table in the database. For a given Allocation of time however, it makes sense to not care whether the allocation was made to either a Project or an Account, so an ActivityBase class abstracts away the difference. An ActivityBase is either a ProjectActivity or an AccountingActivity (object model is below). Back to the database though, there is no direct value in having tables for ProjectActivity and AccountingActivity. BUT the Allocation table needs to store something in the column for it's ActivityBase. Should that something be the Id of the Project / Account or a reference to tables for ProjectActivity / Accounting? How would the mapping look? === Current Db Mapping (Fluent) ==== Below is how the mapping currently looks: public class ActivityBaseMap : IAutoMappingOverride<ActivityBase> { public void Override(AutoMapping<ActivityBase> mapping) { //mapping.IgnoreProperty(x => x.BusinessId); //mapping.IgnoreProperty(x => x.Description); //mapping.IgnoreProperty(x => x.TotalTime); mapping.IgnoreProperty(x => x.UniqueId); } } public class AccountingActivityMap : SubclassMap<AccountingActivity> { public void Override(AutoMapping<AccountingActivity> mapping) { mapping.References(x => x.Account); } } public class ProjectActivityMap : SubclassMap<ProjectActivity> { public void Override(AutoMapping<ProjectActivity> mapping) { mapping.References(x => x.Project); } } There are two odd smells here. Firstly, the inheritance chain adds nothing in the way of properties - it simply adapts Projects and Accounts into a common interface so that either can be used in an Allocation. Secondly, the properties in the ActivityBase interface are redundant to keep in the db, since that information is available in Projects and Accounts. Cheers, Berryl ==== Domain ===== public class Allocation : Entity { ... public virtual ActivityBase Activity { get; private set; } ... } public abstract class ActivityBase : Entity { public virtual string BusinessId { get; protected set; } public virtual string Description { get; protected set; } public virtual ICollection<Allocation> Allocations { get { return _allocations.Values; } } public virtual TimeQuantity TotalTime { get { return TimeQuantity.Hours(Allocations.Sum(x => x.TimeSpent.Amount)); } } } public class ProjectActivity : ActivityBase { public virtual Project Project { get; private set; } public ProjectActivity(Project project) { BusinessId = project.Code.ToString(); Description = project.Description; Project = project; } }

    Read the article

  • Spring 3.0: Handler mapping issue

    - by Yaniv Cohen
    I am having a trouble mapping a specific URL request to one of the controllers in my project. the URL is : http://HOSTNAME/api/v1/profiles.json the war which is deployed is: api.war the error I get is the following: [PageNotFound] No mapping found for HTTP request with URI [/api/v1/profiles.json] in DispatcherServlet with name 'action' The configuration I have is the following: web.xml : <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml,/WEB-INF/applicationContext-security.xml</param-value> </context-param> <!-- Cache Control filter --> <filter> <filter-name>cacheControlFilter</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <!-- Cache Control filter mapping --> <filter-mapping> <filter-name>cacheControlFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <!-- Spring security filter --> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <!-- Spring security filter mapping --> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <!-- Spring listener --> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- Spring Controller --> <servlet> <servlet-name>action</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>action</servlet-name> <url-pattern>/v1/*</url-pattern> </servlet-mapping> The action-servlet.xml: <mvc:annotation-driven/> <bean id="contentNegotiatingViewResolver" class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver"> <property name="favorPathExtension" value="true" /> <property name="favorParameter" value="true" /> <!-- default media format parameter name is 'format' --> <property name="ignoreAcceptHeader" value="false" /> <property name="order" value="1" /> <property name="mediaTypes"> <map> <entry key="html" value="text/html"/> <entry key="json" value="application/json" /> <entry key="xml" value="application/xml" /> </map> </property> <property name="viewResolvers"> <list> <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> </bean> </list> </property> <property name="defaultViews"> <list> <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" /> <bean class="org.springframework.web.servlet.view.xml.MarshallingView"> <constructor-arg> <bean class="org.springframework.oxm.xstream.XStreamMarshaller" /> </constructor-arg> </bean> </list> </property> </bean> the application context security: <sec:http auto-config='true' > <sec:intercept-url pattern="/login.*" filters="none"/> <sec:intercept-url pattern="/oauth/**" access="ROLE_USER" /> <sec:intercept-url pattern="/v1/**" access="ROLE_USER" /> <sec:intercept-url pattern="/request_token_authorized.jsp" access="ROLE_USER" /> <sec:intercept-url pattern="/**" access="ROLE_USER"/> <sec:form-login authentication-failure-url ="/login.html" default-target-url ="/login.html" login-page ="/login.html" login-processing-url ="/login.html" /> <sec:logout logout-success-url="/index.html" logout-url="/logout.html" /> </sec:http> 
the controller: @Controller public class ProfilesController { @RequestMapping(value = {"/v1/profiles"}, method = {RequestMethod.GET,RequestMethod.POST}) public void getProfilesList(@ModelAttribute("response") Response response) { .... } } the request never reaches this controller. Any ideas?

    Read the article

  • Servlet mapping url patterns

    - by Scobal
    I have the following urls that need mapping to two different servlets. Can anyone suggest a working url-pattern please? vehlocsearch-ws: /ws/vehlocsearch/vehlocsearch /ws/vehavailrate/vehavailratevehlocsearch /ws/vehavailrate/vehavailratevehlocsearch.wsdl vehavailrate-ws: /ws/vehavailrate/vehavailrate /ws/vehavailrate/vehavailratevehavailrate /ws/vehavailrate/vehavailratevehavailrate.wsdl So far I have this, which feels right, but isn't: <servlet-mapping> <servlet-name>vehlocsearch-ws</servlet-name> <url-pattern>*.vehlocsearch*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>vehavailrate-ws</servlet-name> <url-pattern>*.vehavailrate*</url-pattern> </servlet-mapping> Note: I have no control over the incoming urls

    Read the article

  • Mapping Repeating Sequence Groups in BizTalk

    - by Paul Petrov
    Repeating sequence groups can often be seen in real-life XML documents. It happens when a certain sequence of elements repeats in the instance document. Here's a fairly abstract example of a schema definition that contains a sequence group: <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="NS-Schema1" targetNamespace="NS-Schema1"> <xs:element name="RepeatingSequenceGroups"> <xs:complexType> <xs:sequence maxOccurs="1" minOccurs="0"> <xs:sequence maxOccurs="unbounded"> <xs:element name="A" type="xs:string" /> <xs:element name="B" type="xs:string" /> <xs:element name="C" type="xs:string" minOccurs="0" /> </xs:sequence> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> And here's the corresponding XML instance document: <ns0:RepeatingSequenceGroups xmlns:ns0="NS-Schema1"> <A>A1</A> <B>B1</B> <C>C1</C> <A>A2</A> <B>B2</B> <A>A3</A> <B>B3</B> <C>C3</C> </ns0:RepeatingSequenceGroups> As you can see, elements A, B, and C are children of an anonymous xs:sequence element which in turn can be repeated N times. Let's say we need to do a simple mapping to a schema with a similar structure but different element names: <ns0:Destination xmlns:ns0="NS-Schema2"> <Alpha>A1</Alpha> <Beta>B1</Beta> <Gamma>C1</Gamma> <Alpha>A2</Alpha> <Beta>B2</Beta> <Gamma>C2</Gamma> </ns0:Destination> The basic map for such a typical task would look pretty straightforward: If we test this map without any modification, it will produce the following result: <ns0:Destination xmlns:ns0="NS-Schema2"> <Alpha>A1</Alpha> <Alpha>A2</Alpha> <Alpha>A3</Alpha> <Beta>B1</Beta> <Beta>B2</Beta> <Beta>B3</Beta> <Gamma>C1</Gamma> <Gamma>C3</Gamma> </ns0:Destination> The original order of the elements inside the sequence is lost, and that's not what we want. The default behavior of the BizTalk 2009 and 2010 Map Editor is to generate a map compatible with older versions that did not have the ability to preserve sequence order. To enable this feature, simply open the map file (*.btm) in a text/XML editor and find the PreserveSequenceOrder attribute of the root <mapsource> element. Set its value to Yes and re-test the map: <ns0:Destination xmlns:ns0="NS-Schema2"> <Alpha>A1</Alpha> <Beta>B1</Beta> <Gamma>C1</Gamma> <Alpha>A2</Alpha> <Beta>B2</Beta> <Alpha>A3</Alpha> <Beta>B3</Beta> <Gamma>C3</Gamma> </ns0:Destination> The result is as expected – all corresponding elements are in the same order as in the source document. Under the hood this is achieved by using one common xsl:for-each statement that pulls all elements in their original order (rather than an individual for-each statement per element name, as in the default mode) and xsl:if statements to test the current element in the loop: <xsl:template match="/s0:RepeatingSequenceGroups"> <ns0:Destination> <xsl:for-each select="A|B|C"> <xsl:if test="local-name()='A'"> <Alpha> <xsl:value-of select="./text()" /> </Alpha> </xsl:if> <xsl:if test="local-name()='B'"> <Beta> <xsl:value-of select="./text()" /> </Beta> </xsl:if> <xsl:if test="local-name()='C'"> <Gamma> <xsl:value-of select="./text()" /> </Gamma> </xsl:if> </xsl:for-each> </ns0:Destination> </xsl:template> The BizTalk Map Editor became smarter, so learn and use this lesser known feature of XSLT 2.0 in your maps and XSL stylesheets.

    Read the article

  • What is wrong with the following Fluent NHibernate Mapping ?

    - by ashraf
    Hi, I have 3 tables (Many to Many relationship) Resource {ResourceId, Description} Role {RoleId, Description} Permission {ResourceId, RoleId} I am trying to map above tables in fluent-nHibernate. This is what I am trying to do. var aResource = session.Get<Resource>(1); // 2 Roles associated (Role 1 and 2) var aRole = session.Get<Role>(1); aResource.Remove(aRole); // I try to delete just 1 role from permission. But the sql generated here is (which is wrong) Delete from Permission where ResourceId = 1 Insert into Permission (ResourceId, RoleId) values (1, 2); Instead of (right way) Delete from Permission where ResourceId = 1 and RoleId = 1 Why nHibernate behave like this? What wrong with the mapping? I even tried with Set instead of IList. Here is the full code. Entities public class Resource { public virtual string Description { get; set; } public virtual int ResourceId { get; set; } public virtual IList<Role> Roles { get; set; } public Resource() { Roles = new List<Role>(); } } public class Role { public virtual string Description { get; set; } public virtual int RoleId { get; set; } public virtual IList<Resource> Resources { get; set; } public Role() { Resources = new List<Resource>(); } } Mapping Here // Mapping .. public class ResourceMap : ClassMap<Resource> { public ResourceMap() { Id(x => x.ResourceId); Map(x => x.Description); HasManyToMany(x => x.Roles).Table("Permission"); } } public class RoleMap : ClassMap<Role> { public RoleMap() { Id(x => x.RoleId); Map(x => x.Description); HasManyToMany(x => x.Resources).Table("Permission"); } } Program static void Main(string[] args) { var factory = CreateSessionFactory(); using (var session = factory.OpenSession()) { using (var tran = session.BeginTransaction()) { var aResource = session.Get<Resource>(1); var aRole = session.Get<Role>(1); aResource.Remove(aRole); session.Save(a); session.Flush(); tran.Commit(); } } } private static ISessionFactory CreateSessionFactory() { return Fluently.Configure() .Database(MsSqlConfiguration.MsSql2008 .ConnectionString("server=(local);database=Store;Integrated Security=SSPI")) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Program>() .Conventions.Add<CustomForeignKeyConvention>()) .BuildSessionFactory(); } public class CustomForeignKeyConvention : ForeignKeyConvention { protected override string GetKeyName(FluentNHibernate.Member property, Type type) { return property == null ? type.Name + "Id" : property.Name + "Id"; } } Thanks, Ashraf.

    Read the article

  • Underlying Concept Behind Keyboard Mappings

    - by ajay
    I am frustrated with key mapping issues. On my Linux box, if I type Home/End in Vim, then the cursor actually moves to the beginning/end of the line. On my Mac when I am on TextEdit, if I do Fn + Left or Fn + Right, it takes me to beginning/end of the line. But if I am on Vim on my Mac terminal, then the same key combinations don't work. Why? I see online all the different cryptic settings that I have to paste in .vimrc to make this work, but I can't find any explanation for those cryptic map, imap settings. What is the underlying issue here, and how can I fix it? Thanks!

    Read the article

  • how to write or what is the concept behind the file unlocker program

    - by Jach Many
    Recently I was trying to delete a file, thinking that I had closed the program which was manipulating it, but it did not delete because the program was still running. (I had mistaken it for an unwanted file.) So I used a file unlocker program to find which process was manipulating that file. That program worked really well, showing the process which was handling the file; the program was http://download.cnet.com/Unlocker/3000-2248_4-10493998.html. What I want to know is: I would like to write a program in Win32 C or .NET to mimic the same process - just to find which process is handling which file, and if possible to close it. Or I want to know the concept behind it. I know this cannot be explained in a few paragraphs, but if I could get some references or external links to references, that would be nice.

    Read the article

  • Is there a concept of a panel in BlackBerry

    - by user469999
    Hi, just as we have the concept of a panel in Swing and AWT, to which we can add controls, do we have any concept of a panel in BlackBerry? Can anyone please tell me about it? I have to create icons in BB and place them at the bottom of the screen to give the look of a toolbar. I know we have the Toolbar package in BB 6.0, but since simulators like the 8520 Curve don't support it, I don't want to use that. Please help me understand: is there any concept of a panel in BB as in AWT and Swing? Thanks in advance, Yogesh Chaudhari

    Read the article

  • Hardest concept to grasp as a beginner

    - by noizetoys
    When you were starting to program, what was the hardest concept for you to grasp? Was it recursion, pointers, linked lists, assignments, memory management? I was wondering what gave you headaches and how you overcame this issue and learned to love the bomb, I mean understand it. EDIT: As a followup, what helped you grok your hard-to-grasp concept?

    Read the article

  • Fluent Nhibernate - Mapping child in parent when Child has reference to parent and not using a list

    - by Josh
    I have a child object in the database that looks like this: CREATE TABLE Child ( ChildId uniqueidentifier not null, ParentId uniqueidentifier not null ) An then I have a parent like so. CREATE TABLE Parent ( ParentId uniqueidentifier not null ) Now, the problem is that in my Parent class, I have public virtual Child Child { get; set; } I've tried references, hasone, referencesany and can't seem to get the mapping right. Anyone have any ideas? Thanks,

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project: The customer learns the basic patterns of frauds and fraud detection The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise Typically, the project results in several important “side effects” Improved data quality Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. 
I usually expect to perform the following tasks: Establish the final infrastructure to measure the efficiency of the deployed models Deploy the models in additional scenarios Through reports By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project Create smart ETL applications that divert suspicious data for immediate or later inspection I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well Of course, for the actual project, I would repeat the data and model preparation as needed It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take additional 5 to 10 days. The approximate timeline for the POC project is, as follows: 1-2 days of training 2-3 days for data preparation and data overview 2 days for creating and evaluating the models 1 day for initial preparation of the continuous learning infrastructure 1 day for presentation of the results and discussion of further actions Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we would spend just one hour more for data preparation, or create just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improve the models and predictions over time without the need for a constant engagement with me.
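    As an illustration of the "DMX queries in OLTP applications" deployment option mentioned above (not taken from the whitepaper - the model, column and server names are invented), a real-time check might issue a singleton prediction query through ADOMD.NET along these lines:

    using System;
    using Microsoft.AnalysisServices.AdomdClient;

    class FraudCheck
    {
        // Returns the model's estimated probability that this transaction is fraudulent.
        static double GetFraudProbability(double amount, string channel)
        {
            using (var conn = new AdomdConnection("Data Source=localhost;Catalog=FraudDW"))
            {
                conn.Open();
                var cmd = new AdomdCommand(
                    "SELECT PredictProbability([Is Fraud], 'true') " +
                    "FROM [Fraud Detection Model] " +
                    "NATURAL PREDICTION JOIN " +
                    "(SELECT @Amount AS [Amount], @Channel AS [Channel]) AS t", conn);
                cmd.Parameters.Add(new AdomdParameter("Amount", amount));
                cmd.Parameters.Add(new AdomdParameter("Channel", channel));
                return Convert.ToDouble(cmd.ExecuteScalar());
            }
        }
    }

    The OLTP application can then divert anything above a chosen threshold for immediate inspection, which is the kind of early warning referred to above.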

    Read the article

  • Ignore mapping when in text input box

    - by Art
    I have AutoHotkey set up in a way that is recognises Cmd-Left and Cmd-Right keystrokes as back/forward navigation in Chrome. Problem is that it also recognises those keys being pressed while I am entering text in textboxes. However when entering text, I'd would like those key combinations to carry out different function - jump to beginning/end of the line, similar to Ctrl-Left/Right. Is there a way to have one mapping working for text boxes and another mapping for everything else in AutoHotkey?

    Read the article

  • Mapping Your Data with Bing Maps and SQL Server 2008 – Part 1

    Jonas Stawski takes you step by step through a sample project that demonstrates how to create an application that can get GeoSpatial coordinate data for addresses within a SQL Server database, and then use that data to locate those addresses on a Bing Map on a website as pushpins, either grouped or ungrouped. And there is full source code too, in the speech-bubble.
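    Not from the article itself, but as a taste of the SQL Server side it builds on: once an address has been geocoded, the coordinates can be stored with the spatial CLR types (the latitude/longitude handling shown is just a sketch):

    using Microsoft.SqlServer.Types;

    class GeocodedAddress
    {
        // SRID 4326 is WGS 84, the coordinate system Bing Maps works with.
        public static SqlGeography ToGeography(double latitude, double longitude)
        {
            return SqlGeography.Point(latitude, longitude, 4326);
        }
    }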

    Read the article
