Search Results


  • Oracle Magazine, July/August 2006

    Oracle Magazine July/August 2006 features articles on Oracle Enterprise Manager, Oracle OpenWorld, Green Mountain Coffee Roasters, Retailers, Identity Management, XML, SQL, ODP.NET Performance, Oracle ADF, Oracle Application Express, and much more.

    Read the article

  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want to be used on all opaque meshes of the scene whenever a specific camera renders. The shader works properly when it is manually applied to a material. The shader is as follows:

        Shader "Custom/Outline" {
            Properties {
                _Color ("Main Color", Color) = (.5,.5,.5,1)
                _OutlineColor ("Outline Color", Color) = (1,0.5,0,1)
                _Outline ("Outline width", Range (0.0, 0.1)) = .05
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            }
            SubShader {
                Tags { "Queue"="Overlay" "RenderType"="Opaque" }
                Pass {
                    Name "OUTLINE"
                    Tags { "LightMode" = "Always" }
                    Cull Off
                    ZWrite Off
                    // Uncomment to show outline always.
                    //ZTest Always
                    CGPROGRAM
                    #pragma target 3.0
                    #pragma vertex vert
                    #pragma fragment frag
                    #include "UnityCG.cginc"

                    struct appdata {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f {
                        float4 pos : POSITION;
                        float4 color : COLOR;
                    };

                    float _Outline;
                    float4 _OutlineColor;

                    v2f vert(appdata v) {
                        // just make a copy of incoming vertex data but scaled according to normal direction
                        v2f o;
                        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                        float3 norm = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
                        float2 offset = TransformViewToProjection(norm.xy);
                        o.pos.xy += offset * o.pos.z * _Outline;
                        o.color = _OutlineColor;
                        return o;
                    }

                    float4 frag(v2f fromVert) : COLOR {
                        return fromVert.color;
                    }
                    ENDCG
                }
                UsePass "Specular/FORWARD"
            }
            FallBack "Specular"
        }

    The camera used for the effect has just a script component which sets up the shader replacement:

        using UnityEngine;
        using System.Collections;

        public class DetectiveEffect : MonoBehaviour {

            public Shader EffectShader;

            // Use this for initialization
            void Start () {
                this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque");
            }

            // Update is called once per frame
            void Update () {
            }
        }

    Unfortunately, whenever I use this camera I just see the background color. Any ideas?
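    For reference, here is a minimal sketch of the replacement call as Unity's documentation describes it. The class name is made up for illustration; the key point is that the second argument is the name of a tag (e.g. "RenderType"), not a tag=value pair, and the subshader tags of the replacement shader supply the values to match. This is the documented API shape, not a verified fix for the setup above:

        using UnityEngine;

        public class ReplacementCamera : MonoBehaviour // hypothetical name
        {
            public Shader replacementShader;

            void OnEnable()
            {
                // Render every object whose shader declares a "RenderType" tag,
                // using the subshader in replacementShader whose "RenderType"
                // value matches (e.g. "Opaque").
                camera.SetReplacementShader(replacementShader, "RenderType");
            }

            void OnDisable()
            {
                // Restore normal rendering.
                camera.ResetReplacementShader();
            }
        }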

    Read the article

  • Transferring domains when registered owner's email address is incorrect

    - by www.jacob-
    Years ago I registered some domains using a now expired university email address. The other contact details for the registered owner (postal address and phone number) are still correct. In order to change/update the email address, the registrar wants to charge £20 a domain. I would like to transfer the domains away from the current registrar. I can unlock the domains and generate an auth code. However, I cannot authorise the transfer by email as any emails sent to the registered owner's address will bounce. This seems to rule out most registrars I have tried. Are there any ways to transfer these domains without paying the £20 fee to update the registered owner's details?

    Read the article

  • .desktop shortcuts aren't working for java applications in LXDE

    - by chaz
    I just installed Minecraft on my LXDE desktop/Lubuntu machine and I'm trying to create a .desktop file on the desktop that executes java -jar ~/minecraftlauncher.jar. The command works in bash scripts and the terminal, but refuses to work when I click on my .desktop shortcut, which is supposed to execute the same command. I've experimented with other jars and they can't seem to start either. Here is my xsession log:

        ** (pcmanfm:1572): DEBUG: launch command: <java -jar ~/Downloads/minecraft_server.jar>
        ** (pcmanfm:1572): DEBUG: sn_id = pcmanfm-1572-administrator-Dimension-3000-java-14_TIME14031891
        Unable to access jarfile ~/Downloads/minecraft_server.jar
        ** (pcmanfm:1572): DEBUG: launch command: <java -jar ~/minecraftlauncher.jar>
        ** (pcmanfm:1572): DEBUG: sn_id = pcmanfm-1572-administrator-Dimension-3000-java-15_TIME14070158
        Unable to access jarfile ~/minecraftlauncher.jar

    UPDATE: Whoops, it seems to work when I give an absolute path. I guess the home path is something else.

    UPDATE: I guess X doesn't resolve the home specifier. I ran a .desktop file that executed a script that outputs the current directory, and it seems to be correct.
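    As a side note to the updates above, this matches how desktop entries behave in general: the Exec line is not run through a shell, so ~ is never expanded. A minimal sketch of a launcher with the path spelled out (the file's Name and the path are illustrative):

        [Desktop Entry]
        Type=Application
        Name=Minecraft Launcher
        # Exec is not parsed by a shell, so tilde expansion never happens;
        # use an absolute path to the jar instead.
        Exec=java -jar /home/administrator/minecraftlauncher.jar
        Terminal=false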

    Read the article

  • Downloading 12.04 Beta today - Does it now have the daily distro updates?

    - by Stephen Myall
    I downloaded the 12.04 Beta 2 on the day of launch and have been testing and sending bug reports. I now want to install it on another machine. My question is: do I use the original image I downloaded and apply the daily distro updates through Update Manager, or do I download a new image (because it now has the distro updates included)? To be clear, I am asking: does the 12.04 Beta 2 ISO get updated on a daily basis?
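    As an aside, when an image does get respun regularly, the usual way to avoid re-downloading the whole thing is zsync against your existing ISO, so only the changed blocks are fetched. A sketch, with the URL merely illustrative of the daily-live layout:

        # reuse the old ISO as a seed; only changed blocks are downloaded
        zsync -i precise-desktop-amd64.iso \
            http://cdimage.ubuntu.com/daily-live/current/precise-desktop-amd64.iso.zsync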

    Read the article

  • Updating large icon in iTunes Connect

    - by Shaggy Frog
    Just wanted to see if I understand properly how/when one can change the "Large icon" for their iOS app in iTunes Connect. Questions are in bold below. To start, first the facts (as I gather) from version 6.6 of the iTC guide (March 2, 2011):

    The Large Icon is a "locked" piece of version information.
    "You will only be permitted to edit Locked version information when your app is in an Editable state."
    The "Editable" states are: Prepare For Upload, Waiting For Upload, Waiting For Review, Waiting For Export Compliance, Upload Received, Rejected, Developer Rejected, Invalid Binary, Missing Screenshot.

    Am I missing anything up until this point? If not, then am I correct to say that the only time I can change an app's Large Icon is when I update the application?

    Here's a more specific use case:

    My app is currently on sale, version 2.0.
    I have version 2.1 ready, and I want the update to coincide with a sale, so I also put a "SALE" banner on top of my large icon (what most devs are doing).
    I have to upload this "SALE" Large Icon when I upload the binary. If I wait until it's been reviewed, it's too late, and I'll have to developer-reject the binary so I can fix it. Is this correct?
    Say I want the sale to last a week. So at the end of that week, I'll want to switch my Large Icon back to the pre-"SALE" version. Will I necessarily have to upload a new binary at that time?

    (Also posted on the Developer Forums, but it's getting no love there...)

    Read the article

  • Applications: The mathematics of movement, Part 1

    - by TechTwaddle
    Before you continue reading this post, a suggestion: if you haven’t read “Programming Windows Phone 7 Series” by Charles Petzold, go read it. Now. If you find 150+ pages a little too long, at least go through Chapter 5, Principles of Movement, especially the section “A Brief Review of Vectors”. This post is largely inspired by that chapter.

    At this point I assume you know what vectors are, how they are represented using the pair (x, y), what a unit vector is, and, given a vector, how you would normalize the vector to get a unit vector. Our task in this post is simple: a marble is drawn at a point on the screen, the user clicks at a random point on the device, say (destX, destY), and our program makes the marble move towards that point and stop when it is reached. The tricky part of this task is the word “towards”; it adds a direction to our problem. Making a marble bounce around the screen is simple: all you have to do is keep incrementing the X and Y co-ordinates by a certain amount and handle the boundary conditions. Here, however, we need to find out exactly how to increment the X and Y values, so that the marble appears to move towards the point where the user clicked. And this is where vectors can be so helpful.

    The code I’ll show you here is not ideal. We’ll be working with C# on Windows Mobile 6.x, so there is no built-in vector class that I can use, though I could have written one and done all the math inside the class. I think it is incidental to the actual problem that we are trying to solve and can be done pretty easily once you know what’s going on behind the scenes. In other words, this is an excuse for me being lazy.

    The first approach uses the function Atan2() to solve the “towards” part of the problem. Atan2() takes a point (x, y) as input, Atan2(y, x) - note that y goes first - and then it returns an angle in radians. What angle, you ask? Imagine a line from the origin (0, 0) to the point (x, y). The angle which Atan2 returns is the angle the positive X-axis makes with that line, measured clockwise. The figure below makes it clear; wiki has good details about Atan2(), give it a read.

    The pair (x, y) also denotes a vector: a vector whose magnitude is the length of that line, which is Sqrt(x*x + y*y), and a direction θ, as measured from the positive X axis clockwise. If you’ve read that chapter from Charles Petzold’s book, this much should be clear.

    Now Sine and Cosine of the angle θ are special. Cosine(θ) divides x by the vector's length (adjacent by hypotenuse), thus giving us a unit vector along the X direction. And Sine(θ) divides y by the vector's length (opposite by hypotenuse), thus giving us a unit vector along the Y direction. Therefore the vector represented by the pair (cos(θ), sin(θ)) is the unit vector (or normalization) of the vector (x, y). This unit vector has a length of 1 (remember sin²(θ) + cos²(θ) = 1?), and a direction which is the same as vector (x, y). Now if I multiply this unit vector by some amount, then I will always get a point which is a certain distance away from the origin, but, more importantly, the point will always be on that line. For example, if I multiply the unit vector with the length of the line, I get the point (x, y). Thus, all we have to do to move the marble towards our destination point is to multiply the unit vector by a certain amount each time and draw the marble, and the marble will magically move towards the click point.

    Now time for some code.
    The application uses a timer-based frame-draw method to draw the marble on the screen. The timer is disabled initially, and whenever the user clicks on the screen, the timer is enabled. The callback function for the timer follows the standard Update and Draw cycle.

        private double totLenToTravelSqrd = 0;
        private double startPosX = 0, startPosY = 0;
        private double destX = 0, destY = 0;

        private void Form1_MouseUp(object sender, MouseEventArgs e)
        {
            destX = e.X;
            destY = e.Y;

            double x = marble1.x - destX;
            double y = marble1.y - destY;

            //calculate the total length to be travelled
            totLenToTravelSqrd = x * x + y * y;

            //store the start position of the marble
            startPosX = marble1.x;
            startPosY = marble1.y;

            timer1.Enabled = true;
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            UpdatePosition();
            DrawMarble();
        }

    The Form1_MouseUp() method is called whenever the user touches and releases the screen. In this function we save the click point in destX and destY; this is the destination point for the marble, and we also enable the timer. We store a few more values which we will use in the UpdatePosition() method to detect when the marble has reached the destination and stop the timer. So we store the start position of the marble and the square of the total length to be travelled. I’ll leave out the term ‘sqrd’ when speaking of lengths from now on. The time-out interval of the timer is set to 40ms, thus giving us a frame rate of about ~25fps. In the timer callback, we update the marble position and draw the marble. We know what DrawMarble() does, so here we’ll only look at how UpdatePosition() is implemented:

        private void UpdatePosition()
        {
            //the vector (x, y)
            double x = destX - marble1.x;
            double y = destY - marble1.y;

            double incrX = 0, incrY = 0;
            double distanceSqrd = 0;
            double speed = 6;

            //distance between destination and current position, before updating marble position
            distanceSqrd = x * x + y * y;

            double angle = Math.Atan2(y, x);

            //Cos and Sin give us the unit vector, 6 is the value we use to magnify the unit vector along the same direction
            incrX = speed * Math.Cos(angle);
            incrY = speed * Math.Sin(angle);

            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and current point, after updating marble position
            x = destX - marble1.x;
            y = destY - marble1.y;
            double newDistanceSqrd = x * x + y * y;

            //length from start point to current marble position
            x = startPosX - (marble1.x);
            y = startPosY - (marble1.y);
            double lenTraveledSqrd = x * x + y * y;

            //check for end conditions
            if ((int)lenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because destination reached");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)distanceSqrd - (int)newDistanceSqrd) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New position");
                timer1.Enabled = false;
            }
        }

    Ok, so in this function, first we subtract the current marble position from the destination point to give us a vector. The first three lines of the function construct this vector (x, y). The vector (x, y) has the same length as the line from (marble1.x, marble1.y) to (destX, destY), and is in the direction pointing from (marble1.x, marble1.y) to (destX, destY). Note that marble1.x and marble1.y denote the center point of the marble. Then we use Atan2() to get the angle which this vector makes with the positive X axis, and use Cosine() and Sine() of that angle to get the unit vector along that same direction. We multiply this unit vector by 6 to get the values by which the position of the marble should be incremented. This variable, speed, can be experimented with and determines how fast the marble moves towards the destination. After this, we check for bounds to make sure that the marble stays within the screen limits, and finally we check for the end condition and stop the timer.

    The end condition has two parts to it. The first case is the normal case, where the user clicks well inside the screen. Here, we stop when the total length travelled by the marble is greater than or equal to the total length to be travelled. Simple enough. The second case is when the user clicks on the very corners of the screen. Like I said before, the values marble1.x and marble1.y denote the center point of the marble. When the user clicks on a corner, the marble moves towards the point and after some time tries to go outside of the screen; this is when the bounds checking comes into play and corrects the marble position so that the marble stays inside the screen. In this case the marble will never travel a distance of totLenToTravelSqrd, because of the correction in its position. So here we detect the end condition when there is not much change in the marble's position. I use the value 4 in the second condition above. After experimenting with a few values, 4 seemed to work okay.

    There is a small thing missing in the code above. In the normal case, case 1, when the update method runs for the last time, the marble position overshoots the destination point. This happens because the position is incremented in steps (which are not small enough), so in this case too we should have corrected the marble position, so that the center point of the marble sits exactly on top of the destination point. I’ll add this later and update the post.

    This has been a pretty long post already, so I’ll leave you with a video of how this program looks while running. Notice in the video that the marble moves like a bot, without any grace whatsoever. And that is because the speed of the marble is fixed at 6. In the next post we will see how to make the marble move a little more elegantly. And also, if Atan2(), Sine() and Cosine() are a little too much to digest, we’ll see how to achieve the same effect without using them - maybe in the post after that. Ciao!
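    As a taste of that trigonometry-free follow-up: the same unit vector can be computed by dividing the vector by its own length. A minimal sketch of that variant, assuming the same marble1, destX/destY and speed fields as in the code above:

        // Hypothetical alternative to the Atan2/Cos/Sin section of
        // UpdatePosition(): normalize (x, y) directly.
        double x = destX - marble1.x;
        double y = destY - marble1.y;
        double len = Math.Sqrt(x * x + y * y);

        if (len > 0)
        {
            // (x / len, y / len) is the unit vector towards the destination
            marble1.x += speed * (x / len);
            marble1.y += speed * (y / len);
        }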

    Read the article

  • Upgrade from 13.04 to 13.10 fails

    - by John
    Tried to upgrade my computer from 13.04 to 13.10, but directly after pressing the upgrade button, a couple of files appear to download, and then the window disappears and nothing happens... I tried it from a terminal window and this is the output (same result):

        jessijo@halesite3:~$ update-manager
        Checking for a new Ubuntu release
        authenticate 'saucy.tar.gz' against 'saucy.tar.gz.gpg'
        extracting 'saucy.tar.gz'
        Real-time signal 0

    Any ideas?
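    One hedged suggestion: the GUI is a front end for the command-line release upgrader, which usually leaves a fuller error trace in the terminal (and under /var/log/dist-upgrade/) when it dies silently:

        # drive the same upgrade from a terminal so errors stay visible
        sudo do-release-upgrade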

    Read the article

  • Automatic storing package before installing it on .deb based system?

    - by macias
    The reason I am asking this question is that I am concerned about simple rollback (I already read how to find out what packages were installed). So I would like to set a global (per entire system) option that forces the system to store each package before installing/updating it. With such a workflow, I could update whatever I want, and if for example the newest version of Dolphin turned out worse than the previous one, I could simply go to the directory with stored packages and install the previous version instead (the previous version is either the base version -- on the ISO -- or the version from a previous update). Is there such a feature -- a global option to automatically store each package before install? It has to be guaranteed that no package is updated on the fly, i.e. without being stored first. I am learning LMDE, but an answer for any .deb-based system would be fine -- Ubuntu, Debian, you name it.
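    Worth noting: a stock apt setup already keeps a copy of every package it downloads, which covers much of this as long as the cache is never cleaned. A sketch, with the package file name purely illustrative:

        # every .deb apt downloads stays here until 'apt-get clean' runs
        ls /var/cache/apt/archives/

        # rolling back is then a matter of installing the cached older version
        sudo dpkg -i /var/cache/apt/archives/dolphin_2.0-1_amd64.deb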

    Read the article

  • PHP safe_mode is a pain, looking for advice (Ubuntu 12.04 server, public webserver)

    - by user73279
    Maybe askUbuntu isn't the right forum or I haven't provided the right search query, but I haven't seen anything in my searching of askUbuntu on PHP safe_mode. I get lots of Windows Safe Mode and Ubuntu Safe Mode results, but not PHP safe_mode.

    So I keep running into one issue after another regarding PHP safe_mode. (I write a lot of my own PHP code for various site maintenance tools and such.) I know safe_mode is going away in the next version of PHP, but I still see a fair amount of advice recommending that you leave it enabled. I've recently consolidated from 3 servers down to 1, and at least one of those old servers had safe_mode disabled without any issues. (The lack of issues may have simply been a matter of good luck.) None of the previous 3 gave me this much trouble, so I'm guessing some additional php.ini/PHP safe_mode setting was turned on for the new server.

    I primarily run WordPress for my websites, with a few MediaWiki sites sprinkled in. And I am currently running into an issue using WordPress's auto-update feature, as it doesn't seem to be able to use fopen. WordPress is not relaying the actual error message to me, but since I was just able to update the plugins I'm using, this is a safe_mode problem. I've had a lot of safe_mode issues since consolidating to this new server.

    Long story short, the advice I'd seen to use safe_mode was all at least 2 years old. Do I really need it? If I disable PHP safe_mode, is there a good set of security measures I should implement - i.e. chmod 640 /var/www/..., add this to your .htaccess, etc. - to protect my server/sites? Thanks
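    Not an authoritative checklist, but these are the php.ini directives commonly suggested as replacements for what safe_mode used to restrict; the paths and the function list are illustrative and need tuning per site:

        ; confine PHP file access to the web root (plus a tmp dir if needed)
        open_basedir = /var/www/:/tmp/

        ; disable shell-related functions the sites don't need
        disable_functions = exec,shell_exec,system,passthru,proc_open,popen

        ; don't advertise the PHP version in response headers
        expose_php = Off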

    Read the article

  • MySQL works with straight php, but not in phpMyAdmin or in Drupal

    - by Marek
    I just updated from PHP 5.1 to 5.2, and both Drupal and phpMyAdmin stopped being able to save information. I've checked the MySQL user permissions - they look OK. I wrote some simple PHP to insert a row into a table, and it works, but if I try to do the same thing in phpMyAdmin, it just says "no change". phpMyAdmin will delete rows and select rows, but not insert or update them. Drupal does the same thing - it will select info from the tables OK, but not insert or update (or delete). Any ideas? I'm really starting to get desperate! Cheers, Marek
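    One quick way to rule permissions in or out, assuming you can connect as the same MySQL account the applications use (the user, host, table and column below are illustrative):

        -- list the privileges the server actually enforces for this account
        SHOW GRANTS FOR 'drupal'@'localhost';

        -- a minimal write test against the affected schema
        UPDATE node SET title = title WHERE nid = 1;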

    Read the article

  • How can I switch user in a shell and use the existing gnome display session?

    - by z7sg
    If I switch user in a terminal with su bob, I can't open gedit because bob doesn't own the display. If I execute xhost + before switching to bob, I can open the display for some applications but not all. I get the following output when trying to execute gedit:

        (crashreporter:4415): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported.
        GLib-GIO:ERROR:/build/buildd/glib2.0-2.28.6/./gio/gdbusconnection.c:2279:initable_init: assertion failed: (connection->initialization_error == NULL)
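    A possible middle ground between xhost + (which opens the display to everyone) and nothing: grant access only to the one local account, and make sure bob's shell knows which display to use. The commands below use standard xhost syntax, though the session-manager warning may need a separate fix depending on the desktop version:

        # as the logged-in desktop user: allow only local user 'bob'
        xhost +SI:localuser:bob

        # in bob's shell: point GTK apps at the right display
        export DISPLAY=:0
        gedit &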

    Read the article

  • Is it OK to reoccupy my old GitHub username to protect repository redirections?

    - by Idan Arye
    I'm considering changing my GitHub username from the old alias I was using as a kid to my real name. I'm concerned about my repository URLs. GitHub will redirect the old URLs, but if someone creates a new account using my old username and creates a repository with the same name as one of my repositories, the URL redirection will break and the URL will lead to their repository, not mine.

    Now, this is understandable, and GitHub recommends not counting on the redirect in the long term and updating all the remotes, but I'm concerned about some Vim plugins I'm hosting on GitHub. It's a common practice to manage Vim plugins with Git (either as separate repositories or as submodules), and if one of the plugins' remotes breaks, you'll get error messages when you try to batch-update all your plugins (it happened to me once...). It's not that hard to solve, and the chances that it'll happen are slim, but I would still like to avoid causing trouble to the users of my plugins...

    To prevent this, I'm thinking of creating a new account with my old username. That way I can avoid the risk of someone else taking my old username and breaking the redirects of my old repositories. While researching this approach I've found GitHub's Name Squatting Policy. According to that policy, GitHub can delete or rename inactive accounts. To my understanding, they do this to prevent cybersquatting, but surely this isn't the case with my fake account - I'm not holding someone else's name in an attempt to sell it to them, I'm merely occupying a name I was using to protect my old URLs...

    So, is it acceptable to go with this plan and create a fake account with my old username?
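    For the users' side of that migration, repointing an existing clone is a one-liner; the account and plugin names below are placeholders:

        # update an existing clone to the renamed account
        git remote set-url origin https://github.com/new-name/some-plugin.git

        # for a submodule, update .gitmodules and re-sync instead
        git config -f .gitmodules submodule.some-plugin.url \
            https://github.com/new-name/some-plugin.git
        git submodule sync some-plugin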

    Read the article

  • Transmission-daemon not picking up on watch directory

    - by Mild Fuzz
    Trying to get my transmission-daemon to pick up files from a Dropbox folder, to make remote starting easier (it's a headless system). As far as I can tell, the settings.json file is as expected, but none of the files I place in the folder get picked up. I have checked that Dropbox is syncing correctly. Here is the whole settings.json file, but the relevant lines are included below:

        "watch-dir": "/home/john/Dropbox/torrents",
        "watch-dir-enabled": true

    Update: It appears to be a permissions issue. From /var/log/syslog:

        Unable to watch "/home/john/Dropbox/torrents": Permission denied (watch.c:79)

    I have tried stopping the daemon (sudo service transmission-daemon stop), changing permissions of the folder using chown (sudo chown -R john /home/john/Dropbox/torrents), and restarting the daemon (sudo service transmission-daemon start). Same result, however.

    Update 2: Permissions for the folder are:

        drwsrwsrwx 2 john debian-transmission 4096 2012-04-09 19:40
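    Given that syslog line, a plausible culprit is not the watch directory itself but a parent directory the debian-transmission user cannot traverse; a hedged sketch of the usual checks, reusing the paths from the question:

        # the daemon user needs execute (traverse) rights on every parent dir
        sudo -u debian-transmission ls /home/john/Dropbox/torrents

        # if that fails, grant traversal along the path, e.g. via the group
        sudo usermod -a -G john debian-transmission
        chmod g+x /home/john /home/john/Dropbox
        sudo service transmission-daemon restart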

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic.

    Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try to do their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails.

    In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller.

    In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan.

    I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me.

    Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There could only be two reasons for this. Either the developer or DBA didn’t know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug.

    What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of science fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager…

    Cheers, Tony.

    Read the article

  • Progress bar in Super Hexagon using OpenGL ES 2 (Android)

    - by user16547
    I'm wondering how the progress bar in Super Hexagon was made. (see image, top left) Actually I am not very sure how to implement a progress bar at all using OpenGL ES 2 on Android, but I am asking specifically about the one used in Super Hexagon because it seems to me less straightforward / obvious than others: the bar changes its colour during game play. I think one possibility is to use the built-in Android progress bar. I can see from some Stackoverflow questions that you can change the default blue colour to whatever you want, but I'm not sure whether you can update it during the game play. The other possibility I can think of for implementing a progress bar is to have a small texture that starts with a scale of 0 and that you keep scaling until it reaches the maximum size, representing 100%. But this suffers from the same problem as before: you'll not be able to update the colour of the texture during run-time. It's fixed. So what's the best way to approach this problem? *I'm assuming he didn't use a particular library, although if he did, it would be interesting to know. I'm interested in a pure OpenGL ES 2 + Android solution.
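    For what it's worth, with ES 2 the colour need not be baked into the texture at all: the common pattern is to draw a plain quad whose x-scale is the progress fraction and feed the colour in through a uniform that can change every frame. A sketch of the shader pair only (program/buffer setup omitted; names are illustrative):

        // vertex shader: a unit quad scaled/translated by a model matrix
        attribute vec4 a_Position;
        uniform mat4 u_Model;   // x-scale encodes the progress fraction
        void main() {
            gl_Position = u_Model * a_Position;
        }

        // fragment shader: the bar colour is a per-frame uniform
        precision mediump float;
        uniform vec4 u_Color;
        void main() {
            gl_FragColor = u_Color;
        }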

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there's no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

    1. Insert entry in ledger
    2. Run validation of recent entries
    3. Insert reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time. In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue.

    Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later.

    We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time 2 ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
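    A minimal sketch of the ledger idea in mongo shell syntax; the collection and field names are invented for illustration, and the validation step is deliberately naive:

        // 1. record the transfer as an immutable ledger entry
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100 });

        // 2. validate: net balance of A across all of A's ledger entries
        var res = db.ledger.aggregate([
            { $match: { $or: [ { from: "A" }, { to: "A" } ] } },
            { $group: { _id: null, balance: { $sum: {
                $cond: [ { $eq: [ "$to", "A" ] }, "$amount",
                         { $multiply: [ "$amount", -1 ] } ] } } } }
        ]).toArray();

        // 3. if overdrawn, append a compensating entry instead of editing
        if (res.length > 0 && res[0].balance < 0) {
            db.ledger.insert({ ts: new Date(), from: "B", to: "A", amount: 100 });
        }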

    Read the article

  • CodePlex Daily Summary for Wednesday, June 12, 2013

    Popular Releases

    Web Pages CMS: 0.5.0.5: Added empty media directory.
    Windows 8 App Design Reference Template: Education Big Picture: EducationBigPicture: Download includes the following -> Source (C# and JS) -> Package -> Snapshots -> Documentation
    Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads: NuGet - ModernUI for WPF is also available as a NuGet package in the NuGet gallery, id: ModernUI.WPF. Modern UI for WPF Templates - a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extension includes the ModernUI.WPF NuGet package.
    Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvement: added exception handling when loading plugins; updated information panel for displaying two lines of text. Tools improvement: Metadata Document Generator (v1.2013.6.10). New tool: Web Resources Manager (v1.2013.6.11) - retrieve list of unused web resources, retrieve web resources from a solution. All tools list: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...
    Document.Editor: 2013.23: What's new for Document.Editor 2013.23: new Insert Emoticon support, improved Format support, minor bug fixes, improvements and speed-ups.
    Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - Release Date 6/10/2013. Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61; setting up your DotNetNuke Module Development Environment; installing Christoc's DotNetNuke Module Development Templates; customizing the latest DotNetNuke Module Development Project Templates.
    Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first set of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This sample illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...
    Papercut: Papercut 2013-6-10: Feature: shows From, To, Date and Subject of email. Feature: async UI and loading spinner. Enhancement: improved speed when loading large attachments. Enhancement: decoupled SMTP server into secondary assembly. Enhancement: upgraded to .NET v4. Fix: messages lost when received very fast. Fix: email encoding issues on display/automatically detect message encoding. Installation note: installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...
    SFDL.NET: SFDL.NET v1.1.0.4: Changelog: critical bug fixed - downloading of files not possible.
    MapWinGIS ActiveX Map and GIS Component: MapWinGIS v4.8.8 Release Candidate - 32 Bit: This is the first release candidate of MapWinGIS. Please test it thoroughly.
    MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtf
    LINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    VR Player: VR Player 0.3 ALPHA: New plugin system with individual folders, TrackIR support, Maya and 3ds Max format support, dual screen support, mono layouts (left and right), cylinder height parameter, barrel effect factor parameter, Razer Hydra filter parameter, VRPN bug fixes, UI improvements, performance improvements, stabilization and logging with Log4Net, new default values based on user feedback, CTRL key to open menu.
    SimCityPak: SimCityPak 0.1.0.8: New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in javascripts; TGA viewer; ground textures added to lot editor; many additional identified instances and properties.
    Wsus Package Publisher: Release v1.2.1306.09: Adds more verifications on certificate validation; WPP will not let the user try to publish an update until the certificate is valid. Adds the certificate expiration date on the 'About' form. Filters approval to prevent a user approving an update for uninstallation when the update does not support uninstallation. Adds the server and console version on the 'About' form; WPP will not let the user publish an update until the server and console are at the same level. WPP do not let user ...
    AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release, Version 7.0607. June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 - AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 - AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    Rawr: Rawr 5.2.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: Changes: NEW - added support for "ImageJumbo.com" links; FIXED - ripping of threads with multiple pages.
    Json.NET: Json.NET 5.0 Release 6: New feature - added serialized/deserialized JSON to verbose tracing. New feature - added support for using type name handling with ISerializable content. Fix - fixed not using default serializer settings with primitive values and JToken.ToObject. Fix - fixed error writing BigIntegers with JsonWriter.WriteToken. Fix - fixed serializing and deserializing flag enums with EnumMember attribute. Fix - fixed error deserializing interfaces with a valid type converter. Fix - Fixed error deser...
    Biller: Biller 1.49: This release fixes company sales statistics.

    New Projects

    apiviewer: Documentation viewer for your service API.
    ArgusLib: Set of libraries I use in most of my projects.
    arwani: This is the armani project.
    Blue Mercs Data Gateway: The Data Gateway is an open source project which simplifies fluently coding database SQL statements, stored procedures, connections and transactions.
    Convert Hashtable Rows into DataTable Columns in C#: Simplest way to convert a Hashtable into a DataTable with all the Hashtable rows converted into DataTable columns.
    everynet: EveryNet is an internet service provider servicing homes.
    Fraktalysator: Software to visualize fractals such as the Mandelbrot set. Still in an early development state.
    Free BarCode API for .NET: Free BarCode API for .NET.
    Graphic filters in WPF: Filters.
    Grid Plugin: Grid Plugin. Turn your <table> into a fully functional price grid.
    mediamonitor: This is the mediamonitor project.
    MyCodingStuffs: This solution includes C#, WCF, MVC 4.0, and ASP.NET libraries.
    OpenErp .Net Connector: Access to OpenErp from .Net.
    Phong and Flat shading: Phong shading.
    Pixel Replacer: The Pixel Replacer is a simple library for replacing pixel colors with a new color, by setting a filter rule.
    plainetl in vb.net: A one-db-to-another-db transform, threaded, command-line tool... initially created to get a flat-file db (FoxPro) into MSSQL.
    Programming in HTML5 with JavaScript and CSS3: Contains material developed while attending a Microsoft course for the MCSD exam 70-480.
    ProjectLinker2012: This tool helps to automatically create and maintain links from a source project to a target project to share code that is common to Silverlight and WPF.
    SALDERA Project: SALDERA TEST project summary. What will this be good for? Time will tell.
    TCPMessageServer: Simple TCP client/server used to pass an object between client and server.
    Thats-Me Dot Net API: Thats-Me Dot Net API is a library which implements the Thats-Me.ch API for the Dot Net Framework.
    T-Sql Code Documentor: Document T-SQL code and extract comments for all objects in a database.
    Windows 8 App Design Reference Template: Measurement: Measurement template will help if you want to build an app which addresses Mass/Weight, Distance/Length, Capacity/Volume etc. measurement placeholders.
    Windows 8 App Design Reference Template: Weather Clock: Weather Clock template will help if you want to build an app which has the features to add current/previous day temperatures and add cities for coverage.
    Windows 8 App Design Reference Template: Education Big Picture: Education Big Picture template will help if you want to build an app which has an option for showcasing campus details, courses, student life etc. data.
    Windows 8 App Design Reference Template: Social Feed: Social Feed template will help if you want to build an app which has the Messaging, Facebook, and Twitter feed sections.
    WP7FlacPlayer: Implemented JFlacLib in C#. A simple player to play FLAC on the Windows Phone system.

    Read the article

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July holiday next week to look forward to. This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article and I think this is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?

    By Jem Janik, Enterprise community manager, Alcatel-Lucent

    Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it's been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20, and what it takes to successfully launch an enterprise 2.0 community is fairly well-known now. Today I want to tell you what I suspect you really want to know about. As the enterprise community manager for Engage, after three years in, what are the top 5 things I wish we (and I mostly mean me) could do over?

    #5 Define your analytics solution from the start

    There is so much to do when you launch a community, and initially growing it without complete chaos is quite a task. It doesn't take too long to get to a point where you want to focus your continued efforts in growing company collaboration. Do people truly talk across regional boundaries, or have we shifted siloed conversations to a new platform? Is there one organization that doesn't interact with another? If you are lucky you'll have someone on your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive, and it may be hard to justify later, as it typically has the community manager as the only main customer. Figure out what you think you'll want to know and how to get it early on. The sooner the better, even if it doesn't seem that critical at the time.

    #4 Lobbies guide you to the right places

    One piece of feedback that comes up more and more as we keep growing Engage is that it's hard to find stuff, or new people are not sure where to start. Something we're doing now is defining some general topic areas of interest to act like "lobbies" into the platform, and some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time. There are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you. We're building that sign for members now, but again we missed the boat, as the majority of the company has already had their initial Engage experience.

    #3 Clean up, clean up, clean up

    Knowledge work and folksonomies are messy! The day we opened the doors to Engage, I would have said we should keep everything ever created in Engage, with an argument that it was a window into our collective knowledge so nothing should go. Well, 6000+ groups and 200,000+ pieces of content later, I've changed my mind. As previously mentioned, with too much "stuff" the system can be overwhelming to new members, and it makes it harder to get what you're looking for. Do we need that help document about a tool we no longer have? NO! Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we only have one group about a given topic instead of 4? YES! Last fall, Engage defined a cleanup process for groups not used for a long time. We also formed a volunteer cleaning army who are extra eyes on the hunt for "stuff" that should be updated, merged, or deleted. It's better late than never, but in line with what's becoming a theme, I wish these efforts had started earlier.

    #2 Communications & local community management

    One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it: connecting people to the other people they should know, the groups they should join, a piece of content that shouldn't be missed. I have worked both inside and outside of communications teams, and they are the best informed people in your company. They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, etc. Having communications professionals who are power users can help scale up community management, because they are already so well connected. They also need to have the platform skills to pay attention without suffering email overload, to know how to grab someone's attention, etc. I wish I had figured this out much earlier. If I had, I would have groomed more communications colleagues into advocates and power members right at the start.

    #1 Grooming advocates vs. natural advocates

    I've just alluded to this above already. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it. Those advocates seem to come out of the woodwork naturally, since some of them are early adopters. Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty. Unfortunately, we didn't get a global spread of those natural advocates. I did ask around when we first launched for other people who might be good candidates, but didn't push too hard, as there were so many other things to get ready. That was a mistake. If I could get a redo, I would have formally asked for people to be assigned where there were gaps and groomed them into advocates. Today, as we find new advocates to fill the gaps, people are hesitant, as the initial set, with three years of practice, are ahead-of-the-curve power members; it definitely would have been easier earlier on.

    As fairly early adopters of corporate-scale enterprise collaboration, there hasn't been a roadmap to follow as we've grown Engage, which is part of the fun! It's clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I've identified the main five I wish I could redo. In the spirit of collaboration, I hope someone else learns from my mistakes!

    View the original article by Jem here.

    Read the article

  • Succeeding through Reference Selling

    - by A&C Redaktion
    References are an excellent way to demonstrate the reliability of partner solutions built on Oracle technologies, because they reflect satisfied customers. They serve as best practices and thus positively influence the buying decisions of new customers. Iris Musiol, Customer Reference Manager DACH, explains the Oracle reference program for partners, along with its benefits, content, and requirements.

    Read the article

  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OSX computer. According to Apple's HK3473 document, for v10.6: Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.

    Which is not true in my testing. If I try to open a connection from my local computer to a remote port:

        telnet example.domain.local 22

    then it will look up the IP address with multicast DNS next to the A and AAAA lookups. This causes a two-second lookup timeout on every lookup. Which is a lot! When I try with IPv4 only, then it won't use the multicast queries to fetch the remote address, just the simple A queries:

        telnet -4 example.domain.local 22

    When I try with IPv6 only:

        telnet -6 example.domain.local 22

    then it will look up with multicast DNS and AAAA again, and the 2-second timeout delay occurs again. I've tried to create a resolver entry in my /etc/resolver/domain.local and /etc/resolver/local.1, but neither of them was working. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply disable them for the selected subdomain (domain.local)? Thank you!

    Update #1: Thanks @mralexgray for the scutil --dns command; now I can see my domain in the list, but it's late in the order:

        DNS configuration
        resolver #1
          domain        : adverticum.lan
          nameserver[0] : 192.168.1.1
          order         : 200000
        resolver #2
          domain        : local
          options       : mdns
          timeout       : 2
          order         : 300000
        resolver #3
          domain        : 254.169.in-addr.arpa
          options       : mdns
          timeout       : 2
          order         : 300200
        resolver #4
          domain        : 8.e.f.ip6.arpa
          options       : mdns
          timeout       : 2
          order         : 300400
        resolver #5
          domain        : 9.e.f.ip6.arpa
          options       : mdns
          timeout       : 2
          order         : 300600
        resolver #6
          domain        : a.e.f.ip6.arpa
          options       : mdns
          timeout       : 2
          order         : 300800
        resolver #7
          domain        : b.e.f.ip6.arpa
          options       : mdns
          timeout       : 2
          order         : 301000
        resolver #8
          domain        : domain.local
          nameserver[0] : 192.168.1.1
          order         : 200001

    Maybe it would work if I could move resolver #8 to position #2.

    Update #2: No, it probably won't work, because the local DNS server on 192.168.1.1 is answering domain.local requests, and it comes before the mDNS (resolver #2).

    Update #3: I could decrease the mDNS timeout in the /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist file, which speeds up the lookups a little, but this is not the solution.

    Read the article

  • How to connect SharePoint Online with Dynamics CRM Online using BDC?

    - by ripperus
    I am trying to connect SharePoint Online with Dynamics CRM Online using BDC, but so far without any results. I want to use accounts from CRM in SharePoint Online as a list. I mean: when I have 100 accounts (customers) in CRM, I want to export these accounts to SharePoint Online as a list. And when an account is edited in CRM, the corresponding element in the list should be updated (and when I edit an element in the SharePoint list, it should update in CRM). Is there any possibility to connect in this way? If so, what should I use: SharePoint Designer 2010, Visual Studio, or the web interface?

    Read the article
