Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • Web Service - SOAP

    - by seth
    My experience with web services is slim and I'm trying to understand this a little better. I have, for instance, built a web service using Visual Studio. To use it, I add a web service reference to my projects; this creates a proxy and its use is pretty simple. Does this use SOAP? I ask because I will now be facing a web service that I must communicate with using SOAP with attachments, and I'm trying to understand the concept behind this and the difference from what I have done so far. Will the proxy still be viable, or do I need to create the XML by hand and post it to the web service? These concepts still confuse me, so any help is appreciated. EDIT: I'm not developing the service, I will just be using it.

    Read the article

  • Alpha blend 3D png texture in XNA

    - by ProgrammerAtWork
    I'm trying to draw a partly transparent texture on a plane, but the problem is that it incorrectly displays what is behind that texture. Pseudo code:

        vertices1, basiceffect1   // the vertices of vertices1 are located BEHIND vertices2
        vertices2, basiceffect2   // the vertices of vertices2 are located IN FRONT of vertices1

        GraphicsDevice.Clear(Blue);
        PrimitiveBatch.Begin();
        // If I draw like this:
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1)
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2)
        // everything gets drawn correctly: I can see the texture of vertices2 through
        // the transparent parts of vertices1.
        // But if I draw like this:
        PrimitiveBatch.Draw(vertices2, trianglestrip, basiceffect2)
        PrimitiveBatch.Draw(vertices1, trianglestrip, basiceffect1)
        // I cannot see the texture of vertices1 behind the texture of vertices2.
        // Instead, the texture of vertices2 gets drawn and the transparent parts are blue
        // (the clear color).
        PrimitiveBatch.Draw(vertice
        PrimitiveBatch.End();

    My question is: why does the order in which I call Draw matter?

    Read the article

  • Steady zoom on center in LWJGL (Modelview)

    - by l5p4ngl312
    I am having a problem in LWJGL with zooming in and out. I am using glScaled(zoom, zoom, 1) before glTranslated. There are two problems: (1) the rate of zoom speeds up a lot when zooming out (at lower zoom values), and (2) it zooms in on the bottom-left corner of the screen rather than the center. Eventually, I would like to have the zoom focused on the mouse position. I have tried to fix these problems by making it glScaled(zoom^12, zoom^12, 1), so that the greater the zoom value, the faster it zooms in, in order to balance out the faster zoom at lower zoom values. To compensate for the zoom being focused on the bottom left, I have tried to subtract (zoom+1)^10 + 2^10 from the X and Y of each sprite. This results in a curved zoom path, first to the left and then to the right. It is a 2D game.
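    For reference, here is a minimal sketch of zooming about a fixed pivot point (for example the screen centre or the mouse position) with immediate-mode LWJGL; the class, method and variable names are assumptions for illustration, not taken from the question:

        import org.lwjgl.opengl.GL11;

        public class CameraZoom {
            // Applies a zoom centred on (pivotX, pivotY) in world coordinates.
            // Order matters: translate the pivot to the origin, scale, translate back,
            // then apply the normal camera translation.
            public static void applyZoom(float zoom, float pivotX, float pivotY,
                                         float camX, float camY) {
                GL11.glLoadIdentity();
                GL11.glTranslatef(pivotX, pivotY, 0f);   // move pivot to the origin
                GL11.glScalef(zoom, zoom, 1f);           // scale about the pivot
                GL11.glTranslatef(-pivotX, -pivotY, 0f); // move pivot back
                GL11.glTranslatef(-camX, -camY, 0f);     // usual camera translation
            }
        }

    A common way to keep the perceived zoom rate constant at any level is to make the zoom factor exponential in the input, e.g. zoom = (float) Math.pow(1.1, wheelTicks), rather than raising it to a fixed power.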

    Read the article

  • Anti-aliasing changed after update (or something...)

    - by Mussnoon
    The anti-aliasing on my system (or in GTK?) has gone weird after I did one of two things: a system update, or installing GIMP 2.7 beta. (Before/after screenshots omitted.) A rendering comparison between Chromium, Firefox and Opera (in that order) shows the same change. Does anyone know how I can get the old anti-aliasing back? As far as I can tell, I never did anything special to achieve it before; it has always been on the default settings since I installed Lucid a few months ago. Update: I have tried different settings (even though I knew they were already at the "best" settings) in Appearance –> Fonts (–> Details) but, as expected, any change there only makes things worse.

    Read the article

  • When should we use weak entities when modelling a database?

    - by Songo
    This is basically a question about weak entities: what are they? When should we use them? How should they be modeled? What is the main difference between normal entities and weak entities? Do weak entities correspond to value objects when doing Domain Driven Design? To help keep the question on topic, here is an example taken from Wikipedia that people can use to answer these questions: in this example OrderItem was modeled as a weak entity, but I can't understand why it can't be modeled as a normal entity. Another question: if I want to track the order history (i.e. the changes in its status), would that be a normal or a weak entity?

    Read the article

  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks (I have attached the complete code). I want to label this cuboid with different names across its sides; how is this possible using OpenGL on Android?

        public class MyGLRenderer implements Renderer {
            Context context;
            Cuboid rect;
            private float mCubeRotation;
            // private static float angleCube = 0;     // Rotational angle in degree for cube (NEW)
            // private static float speedCube = -1.5f; // Rotational speed for cube (NEW)

            public MyGLRenderer(Context context) {
                rect = new Cuboid();
                this.context = context;
            }

            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();                             // Reset the model-view matrix
                gl.glTranslatef(0.2f, 0.0f, -8.0f);              // Translate right and into the screen
                gl.glScalef(0.8f, 0.8f, 0.8f);                   // Scale down (NEW)
                gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
                // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f);    // rotate about the axis (1,1,1) (NEW)
                rect.draw(gl);
                mCubeRotation -= 0.15f;
                // angleCube += speedCube;
            }

            public void onSurfaceChanged(GL10 gl, int width, int height) {
                if (height == 0) height = 1;                     // To prevent divide by zero
                float aspect = (float) width / height;
                gl.glViewport(0, 0, width, height);              // Set the viewport (display area) to cover the entire window
                gl.glMatrixMode(GL10.GL_PROJECTION);             // Select projection matrix
                gl.glLoadIdentity();                             // Reset projection matrix
                GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); // Use perspective projection
                gl.glMatrixMode(GL10.GL_MODELVIEW);              // Select model-view matrix
                gl.glLoadIdentity();                             // Reset
            }

            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);         // Set color's clear-value to black
                gl.glClearDepthf(1.0f);                          // Set depth's clear-value to farthest
                gl.glEnable(GL10.GL_DEPTH_TEST);                 // Enable depth-buffer for hidden surface removal
                gl.glDepthFunc(GL10.GL_LEQUAL);                  // The type of depth testing to do
                gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // Nice perspective view
                gl.glShadeModel(GL10.GL_SMOOTH);                 // Enable smooth shading of color
                gl.glDisable(GL10.GL_DITHER);                    // Disable dithering for better performance
            }
        }

        public class Cuboid {
            private FloatBuffer mVertexBuffer;
            private FloatBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;

            private float vertices[] = {   // width, height, depth
                -2.5f, -1.0f, -1.0f,
                 1.0f, -1.0f, -1.0f,
                 1.0f,  1.0f, -1.0f,
                -2.5f,  1.0f, -1.0f,
                -2.5f, -1.0f,  1.0f,
                 1.0f, -1.0f,  1.0f,
                 1.0f,  1.0f,  1.0f,
                -2.5f,  1.0f,  1.0f
            };

            private float colors[] = {     // R, G, B, A color per vertex
                0.0f, 1.0f, 0.0f, 1.0f,
                0.0f, 1.0f, 0.0f, 1.0f,
                1.0f, 0.5f, 0.0f, 1.0f,
                1.0f, 0.5f, 0.0f, 1.0f,
                1.0f, 0.0f, 0.0f, 1.0f,
                1.0f, 0.0f, 0.0f, 1.0f,
                0.0f, 0.0f, 1.0f, 1.0f,
                1.0f, 0.0f, 1.0f, 1.0f
            };

            private byte indices[] = {     // vertices 0..7 grouped into two triangles per face
                0, 4, 5,   0, 5, 1,
                1, 5, 6,   1, 6, 2,
                2, 6, 7,   2, 7, 3,
                3, 7, 4,   3, 4, 0,
                4, 7, 6,   4, 6, 5,
                3, 0, 1,   3, 1, 2
            };

            public Cuboid() {
                ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mVertexBuffer = byteBuf.asFloatBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mColorBuffer = byteBuf.asFloatBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
            }
        }

        public class Draw3drect extends Activity {
            private GLSurfaceView glView;                        // Use GLSurfaceView

            // Call back when the activity is started, to initialize the view
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                glView = new GLSurfaceView(this);                // Allocate a GLSurfaceView
                glView.setRenderer(new MyGLRenderer(this));      // Use a custom renderer
                this.setContentView(glView);                     // This activity sets to GLSurfaceView
            }

            // Call back when the activity is going into the background
            @Override
            protected void onPause() {
                super.onPause();
                glView.onPause();
            }

            // Call back after onPause()
            @Override
            protected void onResume() {
                super.onResume();
                glView.onResume();
            }
        }
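    One common way to put a label on a face (a sketch only; the LabelTexture class name is made up, and texture coordinates plus per-face draw calls would still need to be added to Cuboid.draw) is to render the text into a Bitmap and upload it as an OpenGL texture:

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.opengl.GLUtils;
        import javax.microedition.khronos.opengles.GL10;

        public class LabelTexture {
            // Draws the given text into a bitmap and uploads it as a GL texture.
            // Returns the texture id; bind it before drawing the face it labels.
            public static int create(GL10 gl, String text) {
                Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888); // transparent by default
                Canvas canvas = new Canvas(bitmap);
                Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
                paint.setColor(Color.WHITE);
                paint.setTextSize(32f);
                canvas.drawText(text, 10f, 44f, paint);

                int[] textures = new int[1];
                gl.glGenTextures(1, textures, 0);
                gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
                GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
                bitmap.recycle();
                return textures[0];
            }
        }

    Each labeled face then needs its own texture coordinates, a gl.glEnable(GL10.GL_TEXTURE_2D) plus glBindTexture call, and a separate glDrawElements call covering just that face's six indices.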

    Read the article

  • Saving and Loading the Game (Automatically or Manually) via Internal Storage Only (Tablet PC Issues)

    - by David Dimalanta
    Here is my question. When making a game app for Android, I considered the device first. It's no problem to save all progress (from levels to records) on a smartphone, because it has an SD card slot. Tablet PCs are the exception: they often have nothing but internal storage. For example, I'm using this tutorial for an audio spectrum (see http://www.youtube.com/watch?v=5cN1VzZXcdo) that involves copying from internal to external storage in order to detect frequency. It works on the desktop but not on the Android device (tablets only, e.g. a Google Nexus tablet). Is there a way to handle saving/loading the game despite these internal/external storage issues? Additionally, why does the audio spectrum code fail on my tablet while it works on the desktop, and is it the same issue as with saving/loading the game?
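    As a sketch of the internal-storage route (plain Android APIs; the SaveManager class, the file name, and the saveGame/loadGame methods are made up for illustration), app-private files work the same on phones and tablets and need no SD card at all:

        import android.content.Context;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;

        public class SaveManager {
            private static final String SAVE_FILE = "savegame.json"; // hypothetical file name

            // Writes the serialized game state to app-private internal storage.
            public static void saveGame(Context context, String state) throws Exception {
                FileOutputStream out = context.openFileOutput(SAVE_FILE, Context.MODE_PRIVATE);
                try {
                    out.write(state.getBytes("UTF-8"));
                } finally {
                    out.close();
                }
            }

            // Reads the serialized game state back, or returns null if no save exists yet.
            public static String loadGame(Context context) {
                try {
                    FileInputStream in = context.openFileInput(SAVE_FILE);
                    try {
                        byte[] data = new byte[in.available()];
                        int read = in.read(data);
                        return read <= 0 ? null : new String(data, 0, read, "UTF-8");
                    } finally {
                        in.close();
                    }
                } catch (Exception e) {
                    return null;
                }
            }
        }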

    Read the article

  • Branching and Merging Improvements in TFS2010

    - by jehan
    Introducing the concept of “first class branches” is a significant version control improvement in the 2010 release. Not only does it help to distinguish between folders and branches, it also enables branch visualizations. Let us look at the improvements in detail.

    · In TFS2008, you don't know which folders are branches: all folders look the same, with the same folder icon. In TFS2010 there is a new icon that shows which folders are branches.

    · In TFS2008, there is no visual means to manage branches: you have no way to identify which branches are related or the type of relationship. In TFS2010 you have visual tools to see the branch hierarchy. To see a branch hierarchy, just right-click the branch and choose Branching and Merging –> View Hierarchy.

    · In TFS2008, there is no option to track the path of changes between branches: if a merge was made into a branch, you can't track which branch that merge came from. In TFS2010 you have tools that show the path of a change between branches, and you can also see on a timeline where the change was added. To track a change, do the following:

    Step 1: Right-click the branch and click View History.
    Step 2: Choose a changeset to track and click the “Track Changeset” button.
    Step 3: Choose the branches that will be in the view and click “Visualize”. In the resulting visualization, you can see that changesets 108, 109, 110 and 119 were merged from Main to the Release_1.0 branch, and that “Release_1.0” was then branched to “Dev1.0”.
    Step 4: You can also see the merges on a timeline by clicking the “Timeline Tracking” button.

    Creating new branches: In TFS2010, the creation of branches has been streamlined a bit compared to 2008. In 2008, creating a new branch was like every other action in the system: changes were pended on the client and then checked in to the server. Because of this, creating a new branch in TFS2008 was a time-consuming process. In TFS2010, the pending-changes step has been bypassed and branch creation is performed entirely on the server. With this approach, the round trip of downloading a copy of each file on the branch and then uploading each file again has been eliminated. Note: in TFS2010, the new branch is created and committed as a single operation on the server. Pending changes are not created, no check-in is required, and the operation cannot be cancelled.

    Manage branch permissions: The properties view for branches also differs from that of ordinary folders or files, containing metadata for the branch, relationship information, and permissions for the branch. In TFS2008, any user with check-out and check-in permissions could create a branch. In TFS2010, you can control the permissions for branches using Manage Branch Permissions.

    Reparent option in TFS2010: In TFS2008, if two branches have no parent-child relationship and we want to perform a merge between them, we have to perform a baseless merge using the tf.exe command line. I have two branches, Release_1.0 and Dev1.0_F2, which have no relationship between them; that is why, when I click the Merge option on Release_1.0, the Target Branch list does not show the Dev1.0_F2 branch. Let us see what we can do about this in TFS2010: first, perform a TFS baseless merge to establish a relationship between the parent branch and the child branch.
    This will only merge the folder, not its contents. TFS baseless merges are performed from the command line using the VS2010 command prompt:

        tf merge /baseless <ParentBranch> <childBranch>

    Check in your pending changes. This creates the link between the branches, but the relationship is still not complete. Now select the child branch in Source Control Explorer and, from the File menu, choose Source Control –> Branching and Merging –> Reparent. In the dialog box, choose the appropriate branch as the new parent. Click Reparent, then go to the parent branch and click Merge. You will now see that the Dev1.0_F2 branch has been added to the Target Branch options.

    Converting Folders to Branches and Branches to Folders: You can convert any folder to a branch from the context menu by right-clicking the folder –> Branching and Merging –> Convert to Branch. In a similar way, you can convert a branch back to a folder using the Convert to Folder option available in the File menu (File –> Source Control –> Branching and Merging –> Convert to Folder). This option is not available in the context menu.

    Read the article

  • Only 192.168.0.3 can request, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The Allow from 192.168.0.3 part is to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html How do I change this to allow /public/file.html requests to get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.
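    One possible direction, sketched with the same Apache 2.2-style Order/Allow directives used above (an illustration, not a tested config): because /public/file.html is a virtual path handled by mod_rewrite, match the request URL with a <Location> block inside the <VirtualHost> rather than a <Directory> block:

        <Location /public/file.html>
            Order Allow,Deny
            Allow from all
        </Location>

    Location sections are merged after Directory sections, so this opens up just that URL while the Deny from all in the <Directory> block keeps applying to everything else.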

    Read the article

  • What programming languages have you taught your children?

    - by Dubmun
    I'm a C# developer by trade, but I have had exposure to many languages (including Java, C++, and multiple scripting languages) over the course of my education and career. Since I code in the MS world for work, I am most familiar with their stack, so I was excited when Small Basic was announced. I immediately started teaching my oldest to program in it but felt that something was missing. Being able to look up every command with the IDE's intellisense seemed to take something away from the experience. Sure, it was easy to grasp, but I found myself thinking that a little more challenge might be in order. I'm looking for something better, and I would like to hear your experiences with teaching your children to program in whatever language you chose. What did you like and dislike? How fast did they pick it up? Were they challenged? Frustrated? Thank you very much!

    Read the article

  • Adventures in Scrum: Lesson 1 &ndash; The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted the Scrum team broken up into smaller units so that less time was wasted on the Scrum ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7±2) involved in the sizing of the work during the “Sprint Planning Meeting”. The standard flippant answer of all Scrum professionals, “Well, that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well, we are not doing Scrum then”, which in turn leads to “We are doing Scrum... but we have split the Scrum team into units of 2-3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so; it is just a form of Agile. Don't get me wrong at this stage, there is nothing wrong with Agile, just don't call it Scrum. The reason the product owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first sprint, even high-performing teams. Remember, it is the product owner's money! We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum ceremonies.

    “The amount of backlog the Team selects is solely up to the Team... Only the Team can assess what it can accomplish over the upcoming Sprint.” - Scrum Guide, Scrum.org

    The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain.

    “A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team.” - Scrum Guide, Scrum.org

    This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high-performing teams, and that is what Scrum as a framework has been optimised to produce. I think that our product owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we NOT have done on our first Scrum project?

    Should not have had 3 interns as the only on-site resource - This led to bad practices, as the experienced guys were not there helping and correcting as they usually would.
    Should not have had the only experienced guys off-site - With both of the experienced technical guys in completely different time zones, it was difficult to get time for questions. Helping the guys on site was just plain impossible.
    Should not have used a part-time ScrumMaster - Although the ScrumMaster attended all of the ceremonies, because they are only in two full days a week it is difficult for the team to raise impediments as they go.
    Should not have used a proxy product owner - This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a proxy product owner, I do not think it works very well in practice. The “single wringable neck” needs to contain both the money and the vision, as well as attending the required meetings.

    I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt...

    Technorati Tags: Scrum,Sprint Planing,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,Scrummaster,Product Owner

    Need help? Professional Scrum Developer Training: SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • How can you become a competent web application security expert without breaking the law?

    - by hal10001
    I find this to be equivalent to undercover police officers who join a gang, do drugs and break the law as a last resort in order to enforce it. To be a competent security expert, I feel hacking has to be a constant hands-on effort. Yet, that requires finding exploits, testing them on live applications, and being able to demonstrate those exploits with confidence. For those that consider themselves "experts" in Web application security, what did you do to learn the art without actually breaking the law? Or, is this the gray area that nobody likes to talk about because you have to bend the law to its limits?

    Read the article

  • Unity3D: How To Smoothly Switch From One Camera To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to switch smoothly from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate its view in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime * smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime * smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime * smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime * smooth);

    But the result is actually not that good.

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    Ok, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it's stored as a document in the NoSQL database. Basically, I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally, in a SQL database, this would be done with groupings, sums and averages, but in a NoSQL database I assume I need to use mapreduce. My current model seems unfit for the task; how should I change the way I store "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google AppEngine's BigTable, so there are some limitations to think of as well.
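    Purely for illustration, here is an in-memory Java sketch of the grouping and counting that such a summary boils down to (the ErrorDoc type and its fields are hypothetical; in the datastore the same shape would be expressed as a map-reduce or an aggregating query):

        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class ErrorSummary {
            // Hypothetical error document: only the fields needed for the summary.
            record ErrorDoc(String type, String message, long timestamp) {}

            // Groups errors by their type and counts how many of each occurred,
            // i.e. the equivalent of SQL's GROUP BY type with COUNT(*).
            static Map<String, Long> summarize(List<ErrorDoc> errors) {
                return errors.stream()
                             .collect(Collectors.groupingBy(ErrorDoc::type, Collectors.counting()));
            }

            public static void main(String[] args) {
                List<ErrorDoc> errors = List.of(
                        new ErrorDoc("NullPointerException", "boom", 1L),
                        new ErrorDoc("Timeout", "slow upstream", 2L),
                        new ErrorDoc("NullPointerException", "boom again", 3L));
                System.out.println(summarize(errors)); // prints each error type with its count
            }
        }

    The map step would emit a (type, 1) pair per document and the reduce step would sum them, which is exactly what groupingBy/counting does here in memory.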

    Read the article

  • Security of logging people in automatically from another app?

    - by Simon
    I have 2 apps. They both have accounts, and each account has users. These apps are going to share the same users and accounts, and they will always be in sync. I want to be able to log in automatically from one app to the other. So my solution is to generate a login_key once a day, for example: 2sa7439e-a570-ac21-a2ao-z1qia9ca6g25, and provide an automated login link to the other app. For example, if the user clicks on: https://account_name.securityhole.io/login/2sa7439e-a570-ac21-a2ao-z1qia9ca6g25/user/123 they are logged in automatically and a session is created. So here we have 3 things that an intruder has to get right in order to gain access: the account name, the login key, and the user id. Bad idea? Or should I go down the path of making one app an OAuth provider? Or is there a better way?
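    For clarity, here is a minimal Java sketch of the scheme exactly as described above (the class and method names are made up; only the URL shape and the once-a-day key come from the question):

        import java.util.UUID;

        public class AutoLogin {
            // Generates the once-a-day login_key: an unguessable random 128-bit value
            // in a dashed format similar to the example above.
            public static String generateLoginKey() {
                return UUID.randomUUID().toString();
            }

            // Builds the automatic-login link pointing at the other app.
            public static String buildLoginLink(String accountName, String loginKey, long userId) {
                return "https://" + accountName + ".securityhole.io/login/" + loginKey
                        + "/user/" + userId;
            }
        }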

    Read the article

  • Is the slow performance of programming languages a bad thing?

    - by Emanuil
    Here's how I see it. There's machine code, and it's all that a computer needs in order to run something. Computers don't care about programming languages; it doesn't matter to them whether the machine code comes from Perl, Python or PHP. Programming languages exist to serve programmers. Some programming languages run slower than others, but that's not necessarily because there is something wrong with them. In many cases it's just because they do more things that programmers would otherwise have to do themselves, and by doing these things they do better what they are supposed to do: serve programmers. So is the slower performance (at runtime) of a programming language really a bad thing?

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific customer event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include:
    Cut for Non-payment (CNP)
    Disconnect Warning (DIWA)
    Reconnect for Payment (REPY)
    Reread (RERD)
    Stop Service (STOP)
    Start Service (STRT)
    Start/Stop (STSP)

    Note the Field values/codes defined for each event. CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.

    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): "This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter." Note the phrase "completed 'disconnect' event": this algorithm implicitly checks for field activities with Completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel Algorithm was triggered, but was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude, if you introduce new customer events that extend or simulate base customer events (the ones included in the base product), ensure that there is no other impact, direct or indirect, on other business functions that the application has to offer.

    Read the article

  • Using proxy on my Ubuntu 12.04 and proxy exception problem

    - by user1343713
    I'm new to Linux systems. I recently installed Ubuntu 12.04 LTS and it works fine for me, but I have a problem. I live in the university dorms, and in order to use the internet I have to enter proxy settings which require authentication. I had a problem with downloading from the store; I read the solution posted here, but it still only works sometimes and I don't know why. The bigger problem is that if I want to open any link from inside the university, I have to disable the proxy. On Windows you can easily add an exception to the proxy settings for this. Is there any possible way of doing that in Ubuntu 12.04? Thank you for being helpful.

    Read the article

  • tomcat behind apache

    - by dannynjust
    I am trying to use mod_proxy_ajp to forward all the requests from tomcat.example.com to example.com:8080. Here is what the Tomcat server.xml looks like:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    and here is the apache.conf config:

        <VirtualHost *:80>
            ServerName tomcat.example.com
            ServerAdmin [email protected]
            ErrorLog logs/tomcat.example.com-error_log
            CustomLog logs/tomcat.example.com-access_log common
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / ajp://example:8009/
            ProxyPassReverse / ajp://example:8009/
        </VirtualHost>

    But it is not working. Any ideas?

    Read the article

  • Teaching logical/analytical thinking

    - by Joshua
    For the past year I have been trial-running a club in which I teach programming, and while the children have progressed, what they really lack is the skill most fundamental to programming: analytical thinking. As I now approach my second year of teaching the children (aged 12-14), I am realising that before I begin teaching them the syntax and how to actually program an app (or, as they would rather, a game), I need to introduce them to analytical thinking first. I have already found Scratch and similar things such as Light-Bot, and I will most certainly be using them to teach the children how to implement their logical thinking, but what I really need are some tips or articles on how to teach analytical thinking itself to children aged 12-14. What I'm looking for are some ideas on how to teach the kind of thinking that these kids will need in order to get into programming, whether that be analytical, logical or critical. How and what should I teach them relating to the way their minds need to be wired when programming solutions to problems?

    Read the article

  • How to send credentials to linkedIn website and get oauth_verifier without signing in again [closed]

    - by akash kumar
    I am facing a problem sending credentials to another website so that I can log the user in (automatically, without them clicking "sign in" there) and get an oauth_verifier value. I want to send the email address and the password through a form (submit button) from my website (e.g. a Liferay portal) to another website (e.g. LinkedIn), so that it automatically returns an oauth_verifier to my website. That means I don't want the user of my website to submit his email and password to LinkedIn again. My goal is to take the user's email and password on my website and show the user his LinkedIn connections, messages, and job postings (again, on my website, not on LinkedIn). I don't want the user redirected to the LinkedIn website to sign in there and then come back to my website. I have obtained a consumer key and a secret key from LinkedIn for my web application. I am using the LinkedIn API and getting an oauth_verifier for the access token, but in order to log in I have to take the user to LinkedIn to sign in, while I want this to happen in the backend.

    Read the article

  • Is there a snippets program that allows for tab entry of variables across the mac?

    - by Jeremy Smith
    I love the Sublime Text editor and the ability to create code snippets like so:

        <snippet>
            <content><![CDATA[\$("${1:selector}").${2:method}]]></content>
            <tabTrigger>jq</tabTrigger>
        </snippet>

    This allows me to type jq[tab] and have it expand to $("selector").method, where I am able to tab through the strings 'selector' and 'method' in order to change them. But what I'd really like to do is use this same snippet when working in Chrome Dev Tools, so I was looking for a Mac snippets program that could support this. However, the three programs that I looked at (Keyboard Maestro, Snippets, CodeBox) don't support the ability to tab through to highlight predetermined strings and change them. Does anyone know of an app that will do this?

    Read the article

  • Fan control on Acer Aspire One D255 netbook

    - by AdamB
    I have Ubuntu Netbook Edition on my Acer Aspire One D255 netbook, and I notice that I always hear the fan working at 100% regardless of the actual temperature. When I run the sensors command it reports only 13°C; there's no reason the fan needs to run this hard at that temperature.

        root@adam-netbook:~# sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +13.0°C  (crit = +100.0°C)

    I'm guessing I may need some drivers in order to interact with the fan? Does anyone have any experience with this? It also seems that "sensors" may not be all that accurate; the temperature never seems to fluctuate.

    Read the article

  • Is it true that first versions of C compilers ran for dozens of minutes and required swapping floppy disks between stages?

    - by sharptooth
    Inspired by this question. I heard that some very early versions of C compilers for personal computers (around 1980, I guess) resided on two or three floppy disks, so in order to compile a program one had to first insert the disk with the "first pass", run the "first pass", then change to the disk with the "second pass", run that, and then do the same for the "third pass". Each pass ran for dozens of minutes, so the developer lost a lot of time in the case of even a typo. How realistic is that claim? What were the actual figures and details?

    Read the article

  • How and why are operating systems bootable from a USB?

    - by user114638
    I've been told to install Ubuntu on my laptop for work in order to learn shell scripting. I've read that the best way is to install Ubuntu on a USB stick and partition my HDD. I'm curious: how is an OS bootable from a USB stick? Is it literally just a small interface that can be put anywhere? This reminds me of the time I downloaded a game onto my USB stick; when I brought it to my friend's house, he told me it would run slowly if I didn't install it and only ran it from the USB stick. Is this different from running Ubuntu from a USB stick? Will Ubuntu be slow?

    Read the article
