Search Results

Search found 6149 results on 246 pages for 'concept mapping'.

Page 135/246

  • How to Change System Application Pages (Like AccessDenied.aspx, Signout.aspx etc)

    - by Jayant Sharma
    An advantage of SharePoint 2010 over SharePoint 2007 is that we can programmatically change the URLs of the system application pages. For example, it was not easy to change the URL of the AccessDenied.aspx page in SharePoint 2007, but in SharePoint 2010 we can change it with just a few lines of code. For this purpose we have two methods available: GetMappedPage and UpdateMappedPage; the latter returns true if the custom application page is successfully mapped, otherwise false. You can override the following pages ( http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.spcustompage.aspx ):

    Member name      Description
    None
    AccessDenied     Specifies AccessDenied.aspx.
    Confirmation     Specifies Confirmation.aspx.
    Error            Specifies Error.aspx.
    Login            Specifies Login.aspx.
    RequestAccess    Specifies ReqAcc.aspx.
    Signout          Specifies SignOut.aspx.
    WebDeleted       Specifies WebDeleted.aspx.

    So now it is time for the implementation:

    using (SPSite site = new SPSite("http://testserver01"))
    {
        // Get a reference to the web application.
        SPWebApplication webApp = site.WebApplication;
        webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, "/_layouts/customPages/CustomAccessDenied.aspx");
        webApp.Update();
    }

    Similarly, you can use SPCustomPage.Confirmation, SPCustomPage.Error, SPCustomPage.Login, SPCustomPage.RequestAccess, SPCustomPage.Signout and SPCustomPage.WebDeleted to override those pages. To reset a mapping, set the target value to null:

    webApp.UpdateMappedPage(SPWebApplication.SPCustomPage.AccessDenied, null);
    webApp.Update();

    One restriction is the location: mapped pages are limited to the /_layouts folder, so when updating the mapped page the URL has to start with "/_layouts/".

    Ref:
    http://msdn.microsoft.com/en-us/library/gg512103.aspx#bk_spcustapp
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebapplication.updatemappedpage.aspx

    Read the article

  • Android – Finding your SDK debug certificate MD5 fingerprint using Keytool

    - by Bill Osuch
    I recently upgraded to a new development machine, which means the certificate used to sign my applications during debug changed. Under most circumstances you’ll never notice a difference, but if you’re developing apps using Google’s Maps API you’ll find that your old API key no longer works with the new certificate fingerprint. Google's instructions walk you through retrieving the MD5 fingerprint of your SDK debug certificate (the certificate that you’re probably signing your apps with before publishing), but they don't say much about the Keytool command itself. The thing to remember is that Keytool is part of Java, not the Android SDK, so you'll never find it by searching through your Android and Eclipse directories. Mine is located in C:\Program Files\Java\jdk1.7.0_02\bin, so you should find yours somewhere similar. From a command prompt, navigate to this directory and type:

    keytool -v -list -keystore "C:/Documents and Settings/<user name>/.android/debug.keystore"

    That’s assuming the path to your debug certificate is in the typical location. If this doesn’t work, you can find out where it’s located in Eclipse by clicking Window –> Preferences –> Android –> Build. There's no need to use the additional commands shown on Google's page. You'll be prompted for a password; just hit Enter. The last line shown, Certificate fingerprint, is the key you'll give Google to generate your new Maps API key. Technorati Tags: Android Mapping

    Read the article

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using subversion, but soon we're making the switch to github. I'm looking at different types of github workflows, and we're not sure if the whole forking concept in github for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote & local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and upstream (which is used to "sync" changes from the main repository). I'm not sure that's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on and merging it into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is, how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

    foreach (ModelMesh m in model.Meshes)
    {
        // Slightly change the way the world matrix is calculated if using the bunny object,
        // since it is not quite centered in object space
        if (isDrawBunny == true)
        {
            world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform);
        }
        else // If not rendering the bunny, draw normally
        {
            world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position);
        }

        foreach (Effect e in m.Effects)
        {
            Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
            e.Parameters["ViewProjection"].SetValue(ViewProjection);
            e.Parameters["World"].SetValue(world);
            e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
            e.Parameters["CameraPosition"].SetValue(camera.Position);
            e.Parameters["LightColor"].SetValue(lightColor);
            e.Parameters["MaterialColor"].SetValue(materialColor);
            e.Parameters["shininess"].SetValue(shininess);
            //e.Parameters
            //e.Parameters["normal"]
        }
        m.Draw();
    }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is, is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!

    Read the article

  • How is "cloud computing" different from "client-server"?

    - by BellevueBob
    Watching a CEO for a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form that a user fills out, pushes a button on the page, and returns some report that was generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser as the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here. I'm an old timer, programming since mainframe days in the late 70's.

    Read the article

  • Why is NDA so hard to understand?

    - by Dave Campbell
    Maybe this concept is simpler for me because of all the jobs I've been on over the years requiring security clearances. I've signed quite a few NDA forms. Some for big companies, some for small, but the meaning of "NDA" remains constant: Non-Disclosure Agreement. To me, that takes no further explanation, but apparently it's confusing to some people, and I don't understand how you can be confused. The papers I signed with the U.S. Army in 1970 read "10 years and $10,000" for a violation... can't imagine what it's up to now, but THAT is a strict NDA :) So those things I've been told, I cannot talk about, period. Even if the entire world knows about them, I cannot speak about them until the information goes off NDA. An example was a Silverlight release a while back. It might have been Silverlight 3, I don't remember. Everyone was anxiously awaiting the release so they could post their material. Of course the entire world knew it was coming out and imminently so. Some enterprising folks had even found the bits on a server before the official announcement. So then the situation became: everyone knew about it, some were even coding with it and blogging about it and yet we couldn't talk about it. Scott Guthrie's posting about it opened the flood gates and then it went off NDA, but up until that moment, we were locked. Sitting out on the edge you're uninstalling and re-installing all the time and you get frustrated when things that used to work don't, but hey... those bits were still warm when you got 'em, and that's the fun. But that fun comes at a price, and the price is the NDA. Awkward yes, confusing no... See you at MIX10, and Stay in the 'Light! MIX10

    Read the article

  • Implementing RLE into a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, but the 3D side comes in when moving down into caves and whatnot. Now this is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems that my tiles cannot occupy the same space as a tile on the same z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, with the 4th dimension being something which controls how many to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable if I could have up to or more than 1000 in x and y.
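
    For illustration, here is a minimal run-length-encoding sketch in Java of one way to store a vertical column of z levels as runs; the class and field names are hypothetical and not taken from the post:

    import java.util.ArrayList;
    import java.util.List;

    // One (x, y) column of the map, stored as runs of identical block ids along the z-axis.
    class RleColumn {
        private static class Run {
            int blockId;  // what this run contains
            int length;   // how many consecutive z levels it covers
            Run(int blockId, int length) { this.blockId = blockId; this.length = length; }
        }

        private final List<Run> runs = new ArrayList<Run>();

        // Append one z level to the top of the column.
        void push(int blockId) {
            if (!runs.isEmpty() && runs.get(runs.size() - 1).blockId == blockId) {
                runs.get(runs.size() - 1).length++;   // extend the current run
            } else {
                runs.add(new Run(blockId, 1));        // start a new run
            }
        }

        // Decode: walk the runs until the requested z level is reached.
        int get(int z) {
            int base = 0;
            for (Run r : runs) {
                if (z < base + r.length) {
                    return r.blockId;
                }
                base += r.length;
            }
            throw new IndexOutOfBoundsException("z=" + z);
        }
    }

    Under this sketch the map becomes a 2D array of columns rather than a dense 3D array, and columns that are mostly air or mostly solid rock compress to a handful of runs, which is where the memory saving comes from.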

    Read the article

  • AutoVue 20.2 for Agile Released

    - by Angus Graham
    Oracle’s AutoVue 20.2 for Agile PLM is now available on Oracle’s Software Delivery Cloud. This latest release allows Agile PLM customers to take advantage of new AutoVue 20.2 features in the following Agile PLM environments: 9.3.1.x and 9.3.0. AutoVue 20.2 delivers improvements in the following areas. New format support: AutoVue 20.2 adds support for the latest versions of popular file formats, including ECAD (Cadence Concept HDL 16.5, Allegro Layout 16.5, Orcad Capture 16.5, Board Station ASCII Symbol Geometry, Cadence Cell Library), MCAD (CATIA V5 R21, PTC Creo Parametric 1.0, Creo Element\Direct Modeling 17.10, 17.20, 17.25, 17.30, 18.00, SolidWorks 2012, SolidEdge ST3 & ST4, PLM XML), 2D CAD (Creo Element/Direct Drafting 17.10 to 18.00) and Office (MS Office 2010: Word, Excel, PowerPoint, Outlook). Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle’s Software Security Assurance standards. An updated version of the AutoVue Document Print Service offerings, which include the ability to select CAD layers for printing. For further details, check out the What’s New in AutoVue 20.2 datasheet.

    Read the article

  • Questions about Code Reviews

    - by bamboocha
    My team plans to do code reviews and asked me to put together a concept for what and how we are going to do them. We are a small group of 6 team members. We use an SVN repository and write programs in different languages (mostly VB.NET, Java and C#), but the reviews should also be possible for others, not yet defined. Basically I am asking you how you are doing it; to be more precise, I made a list of some questions I have:
    1. Peer meetings vs. ticket system? Would you tend to do meetings with all members, rather than something like a ticket system, where the developer can add a new code change and some or all members need to check and approve it?
    2. What tool? I did some research on my own, and it showed that Rietveld seems to be the program to use for non-git solutions. Do you agree/disagree, and why?
    3. Is there a good workflow to follow?
    4. Are there good ways to minimize the effort for those meetings even more?
    5. What are good questions every code reviewer should ask? I already made a list with some questions; what would you append/remove? Are there any magic numbers in the code? Do all variable and method names make sense and are they easily understandable? Are all queries using prepared statements? Are all objects disposed/closed when they are not needed anymore?
    6. What are your general experiences with it? What's important? Things to consider/prevent/watch out for?

    Read the article

  • How to rotate different data across the days of the week in PHP [migrated]

    - by shihon
    I am working on a project in which I have to distribute a different ad per day. The ads are in the form of an array:

    $ad = array(
        'attribute1_value' => "12",
        'attribute2_value' => "xyz",
        'attribute3_value' => 'http://example.com',
        'attribute4_value' => 'data');

    The logic I am using with a switch case:

    $day = date('w', time());
    switch ($day) {
        case '0':
            if ($day == '0') {
                $count = 0;
                echo $ad;
                $count++;
            } else {
                $count = 7;
                echo $ad;
            }
            break;
        case '1':
            if ($day == '1') {
                $count = 1;
                echo $ad;
                $count++;
            } else {
                $count = 8;
                echo $ad;
            }
            break;

    The problem is: if I have ~15 ads and I want to distribute one ad per day, date('w') outputs the present day of the week, but after day 7 (i.e. Saturday), ad number 8 should start on Sunday. I have to implement this scenario using the date function. I also have to send ads to those users who have not seen the ad before. I am not an expert in PHP, just a beginner working in PHP/MySQL. Kindly help me to improve this concept.

    Read the article

  • Can I remove the systems from a component entity system?

    - by nathan
    After reading a lot about entity/component based engines, I feel like there is no real definition for this kind of engine. Reading this thread: Implementing features in an Entity System and the linked article made me think a lot. I did not feel that comfortable using the System concept, so I'll write something else, inspired by this pattern. I'd like to know if you think it's a good way to organize game code and what improvements can be made. Compared with a stricter implementation of an entity/component based engine, is my solution viable? Do I risk getting stuck at any point due to the lack of flexibility of this implementation (or anything else)? My engine, as in entity/component patterns, has entities and components, but no systems, since the game logic is handled by components. Also, I think the main difference is the fact that my engine will use inheritance and OOP concepts in general; I mean, I don't try to minimize them. Entity: an entity is an abstract class. It holds its position, width and height, scale and a list of linked components. The current implementation can be found here (java). Every frame, the entity will be updated (i.e. all the components linked to this entity will be updated) and rendered, if a render component is specified. Component: as with entities, a component is an abstract class that must be extended to create new components. The behavior of an entity is created through its component collection. The component implementation can be found here. Components are updated when the owning entity is updated or, for only one specific component (the render component), rendered. Here is an example of a logic component (i.e. not a renderable component, a component that's updated each frame) in charge of listening for keyboard events, and a render component in charge of displaying a plain sprite (i.e. not animated).
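
    For illustration, a minimal Java sketch of the structure described above; the class and method names here are hypothetical and not taken from the linked implementation:

    import java.util.ArrayList;
    import java.util.List;

    // An entity holds its spatial data and a list of components; the game logic lives in the components.
    abstract class Entity {
        float x, y, width, height, scale;
        private final List<Component> components = new ArrayList<Component>();

        void addComponent(Component c) { components.add(c); }

        // Called every frame: update all linked components.
        void update(float delta) {
            for (Component c : components) {
                c.update(delta);
            }
        }
    }

    // A component implements one piece of behavior for its owning entity.
    abstract class Component {
        protected final Entity owner;
        protected Component(Entity owner) { this.owner = owner; }
        abstract void update(float delta);
    }

    // Example of a logic component: polls input each frame and moves its owner.
    class KeyboardInputComponent extends Component {
        KeyboardInputComponent(Entity owner) { super(owner); }
        @Override
        void update(float delta) {
            // Read keyboard state here and adjust the owner, e.g. owner.x += speed * delta;
        }
    }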

    Read the article

  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major / mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet, anonymous functions are a very old and very well-known concept in Mathematics and Computer Science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, and why did they only introduce them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.
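
    As a point of reference, here is a small example of the construct in question, written in Java 8 syntax (one of the mainstream languages that added lambdas in a later revision); it is purely illustrative:

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Function;

    public class LambdaExample {
        public static void main(String[] args) {
            // An anonymous function bound to a variable...
            Function<Integer, Integer> square = x -> x * x;
            System.out.println(square.apply(7)); // prints 49

            // ...and one passed directly as an argument, the typical modern use.
            List<String> names = Arrays.asList("Church", "McCarthy", "Landin");
            names.forEach(name -> System.out.println(name.toUpperCase()));
        }
    }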

    Read the article

  • Drivers for Ubuntu 13.10 [on hold]

    - by Fernando De Souza Martins
    I just installed Ubuntu 13.10. My screen resolution is not fitting my screen, as the Ubuntu interface is stretching all over the screen, so I thought I might install Nvidia's driver, which I know can let me adjust the exact resolution I need. So I began a 2-hour quest: I downloaded the driver hoping I would have a wizard to install it, but no. So I tried to do a bit of research and I found that feature, I think it's called Additional Drivers in English, but it won't show the Nvidia drivers. I tried the terminal, but once I type the commands I found, it asks for a password and I can't type anything once the password is asked for. So, my question, obviously: how do I install this driver? I am not sure if this is appropriate, but why doesn't Ubuntu have a wizard to install things? I feel like I'm working for the OS, when it should be the other way around, but I love the concept of Linux, so I'm pushing forward and trying to use it. Another thing is, I had to install a bunch of drivers and applications for the drivers in Windows; do I need to install any other drivers? I can't change my mouse's sensitivity in the OS, it seems, so how do I do that? I'm sorry I'm asking all of this, but it seems necessary.

    Read the article

  • K-12 and Cloud considerations

    - by user736511
    Much like every other Public Sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition.  Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students.  One area viewed as a solution to at least some of the challenges is the use of "Cloud" in education.  Although the concept of "Cloud" is nothing new in education, with many providers supplying educational material over the web, school districts are deferring previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail.  Doing so, however, does not reduce an important risk, that of privacy.  As always, Cloud implementations are viewed in a skeptical manner because of the perceived reduction in sensitive data management and protection thereof, although with a careful approach and the right tooling, the benefits realized by Clouds can expand to security and privacy.   Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies and strong security regardless of where the sensitive data is stored - on premise or in a Cloud.  Common management tools, role-based access controls, access policy management and engineered systems provided by Oracle can be the foundational pieces on which school districts can build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how to best take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • Deferred Rendering With Diffuse, Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file: Ambient/Diffuse/Specular/Emissive reflectivity coefficients (Ka, Kd, Ks, Ke), Shininess, Diffuse Map, Specular Map, Normal Map. I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but do you not require the diffuse, specular and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, and then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros: Early integration helps team members avoid painful merges. It encourages better coding habits, because everyone makes sure that they don't break co-workers' code every day. Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work. Actually, I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour away from leaving work and going home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code only once you have finished an entire task, I believe the best approach is setting small and flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, article or whatever that you can suggest that could be in sync with my opinion? And do you find any problems working with this approach? Thank you in advance.

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc.  Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it.  Below are some of the milestones we have reached since the original case study. At the time of the original case study, two report types had been automated to output as PowerPoint presentations.  Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs. One improvement we made over the original application was to create a PowerPoint add-in which allows the users to tag a slide.  These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files.  This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the template. The new library we created also enabled us to create two new Word based reports in two weeks.  The library we created abstracts the generation of the documents from the business logic and the data retrieval.  The key to this is the markup.  Content Controls are a good method for identifying sections of a template to be modified or replaced.  Join this with the concept of all data being generically either scalar or two dimensional, and the code becomes more generic. In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.  del.icio.us Tags: PSC Group,OOXML,Case Study,Office Open XML,Word,PowerPoint

    Read the article

  • looking for a short explanation of fuzzy logic

    - by user613326
    Well, I got the idea that the basics of fuzzy logic are not that hard to grasp, and I got the feeling that someone might be able to explain it to me in like 30 minutes - just like I understand neural networks and am able to re-create the famous XOR problem, and go just beyond it and create 3-layer networks of x nodes. I'd like to understand fuzzy logic to a similarly useful level, in C#. However, the problem I face is this: I'd like to get the concept right, but I see many websites that include lots of errors in their basic explanations. For example, showing pictures and then using different numbers than the ones shown in the pictures to calculate, as if lots of people just copied stuff without noticing what they wrote down, while others go too deep into their math notation. To me that's very annoying to learn from. For me there is no need to re-invent the wheel; AForge already has a fuzzy logic framework. So what I am looking for are some good examples, good examples like how the neural XOR problem is solved. Is there such an instructional resource out there? Do you know a web page or a YouTube video where it is explained briefly? What would you recommend? Note that this article comes close, but it just doesn't nail it for me. After that I downloaded a bunch of free PDFs, but most are academic and hard to read for me (I'm not a native English speaker and don't have a special math degree). (I've been looking around a lot for this; good starter material about it is hard to find.)

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life and math, and to use them to design my projects. They can help in creating a better design or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and one as a master. This has saved me a lot of headaches: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and truly real-time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor: hierarchy of processing; not using one big brain but rather several small, distributed brain units; using distributed power or resources. I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • C++ Numerical Recipes – A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey – over the next 6 weeks I plan to read through C++ Numerical Recipes 3rd edition http://amzn.to/YtdpkS. I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful – provided you have the ability to choose the correct algorithm to leverage them. I really think this is the most fascinating area of programming – a lot more exciting than LOB CRUD!!! When you think about it, everything is a function – we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task – you will be using WYSIWYG designers to build GUIs, MVVM, service mapping & virtualization, workflows, and ORM entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers of entry of both C++ & Numerical Analysis seem like a rational choice to me. It's also fascinating – stepping outside the box. This is not the first time I've delved into numerical analysis. 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/. 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com – not bad. 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI – not an easy read, as topics jump back & forth across chapters. 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff). I also read through AI a Modern Approach 3rd edition END to END http://amzn.to/V5yQSp – this took me a few years but was the most rewarding experience. I'll post progress updates – see you on the other side!

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that: improves the out-of-the-box experience – just build the mapping and the appropriate KM is used; and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up the file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, which transforms a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus – and it's a big plus – it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a "Code and Go" style of software development. They only developed for current problems and did not really take a look at how the company could leverage some of the code we were developing across the entire enterprise system.  The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA. This is due to the fact that a lot of the standards and policies implemented by enterprise IT governing boards were initially written for developing under the "Code and Go" paradigm and do not take into account idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for a "Develop to Integrate" philosophy versus "Code and Go". Examples of the "Develop to Integrate" philosophy: defining preferred data transfer methodologies (XML vs. JSON) and when to use them; updating security best practices for exposing public services based on existing standard security policies; defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise.

    Read the article

  • How to Generate XBRL from Oracle's Applications

    - by Theresa Hickman
    I've been getting quite a few emails asking how Oracle supports XBRL. As of June 2011, all public US companies were required to produce XBRL-based financial statements to the SEC. The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10K and 10Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results. Click here to watch a 3 min demo about this cool tool. Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger? Answer: No problem, all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps:
    1. Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management.
    2. Publish your financial statements out of general ledger to Excel.
    3. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM System Integration leader based in Germany and Switzerland, and an historical Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution, based on tools, methodology and services. ec4u's Siebel2FusionCRM Integration solution's main objectives are:
    - Integration between Siebel (on-premise) and Fusion CRM / Marketing (“in the cloud”)
    - Accounts, Contacts and Addresses are maintained by Sales in Siebel CRM and synchronized in real-time into Fusion CRM / Marketing
    - CDM Processing ensures clean data for marketing campaigns (validation and deduplication)
    - Create E-Mail marketing campaigns and newsletters in Fusion
    The solution features:
    - Upsert processes figure out what information needs to be updated, inserted or terminated (deleted). However, as Siebel is the data master, it is still a one-way synchronization.
    - Handle deleted or nullified information by terminating them in Fusion CRM (set start and end date to define the validity period)
    - Initial load and real-time synchronization use the same processes
    - Invocations/Operations can be repeated due to no transactional support from Fusion web services
    - Tagging sub entries in case of 1 to N mapping (Example: Telephone number is one simple field in Siebel but in Fusion you can have multiple telephone numbers in a sub table)
    - E-Mail-Notification in case of any error (containing error message, instance number, detailed payload)
    - Schematron Validation
    Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article
