Search Results

Search found 28279 results on 1132 pages for 'syntax case'.


  • The SQL Server Community

    - by AllenMWhite
    In case you weren't aware of it, I absolutely love the SQL Server community. The people I've gotten to know have amazing knowledge, and they love sharing that knowledge with anyone who wants to learn. How can you not love that? It's inspiring and humbling all at the same time. There are a number of venues where the SQL Server community comes together. I'm including Twitter, the PASS Summit, the various SQL Saturday events, SQLBits, Tech Ed, and the local user groups. Each of us takes part in...(read more)

    Read the article

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:
    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.
    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
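    A minimal C# sketch of that axis-aligned test (the Box struct and names are illustrative, not from the question):

        // AABB overlap: the squares touch unless one is entirely past the
        // other on some axis (screen coordinates, y growing downward).
        struct Box { public float Left, Right, Top, Bottom; }

        static bool Touching(Box a, Box b)
        {
            return !(a.Left > b.Right     // a entirely right of b
                  || a.Right < b.Left     // a entirely left of b
                  || a.Bottom < b.Top     // a entirely above b
                  || a.Top > b.Bottom);   // a entirely below b
        }

    For the rotated case there is no equally cheap four-comparison test; the usual generalization is the separating axis theorem: project both squares onto each edge normal and report a hit only if no axis shows a gap.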

    Read the article

  • The practical cost of swapping effects

    - by sebf
    Hello, I use XNA for my projects and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with appropriate parameters. I wondered if someone could explain exactly what is costly about this process? And put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking, I would:
    1. Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
    2. Draw all the objects to a render target in memory.
    3. Get the color from the target and use it to look up the selected object.
    What portion of the total time taken to complete that process would be spent swapping the shaders? My instincts say that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
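    For reference, a rough sketch of that picking pass in XNA 4.0 style (device, pickTarget, pickEffect, objects, and the IndexToColor/ColorToIndex helpers are assumptions for illustration, not from the question):

        // Pass 1: draw every object with a flat "pick" effect whose color encodes its index.
        device.SetRenderTarget(pickTarget);          // a RenderTarget2D sized to the viewport
        device.Clear(Color.Black);
        for (int i = 0; i < objects.Count; i++)
        {
            // The effect swap under discussion: bind the pick shader and its parameter.
            pickEffect.Parameters["PickColor"].SetValue(IndexToColor(i).ToVector4());
            objects[i].Draw(pickEffect);
        }
        device.SetRenderTarget(null);                // back to the back buffer

        // Pass 2: read back the single pixel under the cursor and decode the index.
        Color[] pixel = new Color[1];
        pickTarget.GetData(0, new Rectangle(mouseX, mouseY, 1, 1), pixel, 0, 1);
        int pickedIndex = ColorToIndex(pixel[0]);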

    Read the article

  • What is "Unlock keyring" and how do I get rid of it?

    - by cipricus
    (In my case this message never appeared before installing Ubuntu One - see this question). Can I use Ubuntu One and avoid being prompted like this each time? I use Lubuntu 12.04. Edit: After exchanging comments I add the supplementary info: I am asked for a password and to select a session if I log out and in (auto-login is NOT set). Ubuntu One is installed but set NOT to start with the session. The keyring prompt appears nonetheless. Before installing Ubuntu One this didn't happen. Also, following the advice of con-f-use, I have entered ps -A | grep -i ubunt[u] in a terminal and got 2346 ? 00:00:01 ubuntuone-syncd, which means that ubuntuone is running when it was not supposed to be.

    Read the article

  • The provider did not return a ProviderManifestToken string Entity Framework

    - by PearlFactory
    Moved from home to work, went to fire up my project, and after a long pause got "The provider did not return a ProviderManifestToken string" - or the even more obscure ProviderIncompatibleException. Now, after 20 minutes of chasing my tail over different versions of Entity Framework (4.1 vs 4.2... blah blah blah), I looked inside at the inner exception: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible." DOH!!!! A clean translation is that it can't find SQL Server - it's offline or not running. So check that the power is on and the service is running, or, as in my case, edit web.config and change the connection string back to the work SQL box. Hope you don't hit this pain, as the default errors in the EntityFramework 4.xx releases are no help at all at the moment. Cheers
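    For anyone else bitten by this, the line to check is the connection string in web.config - a hedged example with made-up server and database names:

        <connectionStrings>
          <!-- Hypothetical entry: point Data Source at a SQL Server you can actually reach. -->
          <add name="MyContext"
               connectionString="Data Source=WORK-SQL01;Initial Catalog=MyDb;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>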

    Read the article

  • How can I create a symlink to the location that Ubuntu 10.10 mounts a CD?

    - by Michael Curran
    In Ubuntu 10.10, when I insert a CD or DVD into my optical drive, the system mounts it in a folder called /media/XYZ, where XYZ is the disc's label. This has caused problems with Wine: in order for an application to verify that its CD is present, Wine uses a symlink pointing to the mounted CD's folder. In this case, that folder must be /media/XYZ, but for a different application's disc the folder would be different. I would like to know if there is a way to create a symlink that will always point to the folder mounted from a given /dev/cdrom* device, or how to force the system to always mount CDs to the same address (i.e. /media/cdrom).
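    One hedged possibility (an assumption, not something from the question) is the classic /etc/fstab entry that pins the drive to a fixed mount point:

        # /etc/fstab - mount the optical drive at a fixed path instead of /media/<label>
        /dev/cdrom  /media/cdrom  udf,iso9660  user,noauto  0  0

    After sudo mkdir -p /media/cdrom, a plain mount /media/cdrom uses this entry; whether the desktop automounter also honors it is a separate question.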

    Read the article

  • Statements of direction for EPM 11.1.1.x series products

    - by THE
    Some of the older parts of EPM that have been replaced with newer software will phase out after January 2013. For most of these the 11.1.1.x series will be the last release. They will then only be supported via sustaining support (see policy). We have notes about:
    - the Essbase Excel Add-in (replaced by SmartView, which nearly achieved functionality parity with release 11.1.2.1.102): Oracle Essbase Spreadsheet Add-in Statement of Direction (Doc ID 1466700.1)
    - Hyperion Data Integration Management (replaced by Oracle Data Integrator (ODI)): Hyperion Data Integration Management Statement of Direction (Doc ID 1267051.1)
    - Hyperion Enterprise and Enterprise Reporting (replaced by HFM): Hyperion Enterprise and Hyperion Enterprise Reporting Statement of Direction (Doc ID 1396504.1)
    - Hyperion Business Rules (replaced by Calculation Manager): Hyperion Business Rules Statement of Direction (Doc ID 1448421.1)
    - Oracle Visual Explorer (this one phased out in June 2011 already - just in case anyone missed it): Oracle Essbase Visual Explorer Statement of Direction (Doc ID 1327945.1)
    For a complete list of the supported lifetimes, please review the "Oracle Lifetime Support Policy for Applications".

    Read the article

  • Should I just always Convert.ToInt32 my integers to account for potential nullable integers?

    - by Rowan Freeman
    If my MSSQL database contains a column that allows NULL, then ORMs such as Entity Framework (in my case) create .NET properties that are nullable. This is great, and the way I use nullables is like this:
    C#
        int? someInt = 5;
        int newInt = someInt.Value; // woot
    VB.NET
        Dim someInt As Integer?
        Dim newInt As Integer = someInt.Value ' hooray
    However, recently I had to make a change to the database to make an Id field no longer NULL (nullable). This means that .Value is now broken. This is a nuisance if the Id property is used a lot. One solution I thought of is to just use Convert.ToInt32 on Id fields so it doesn't matter whether the int is nullable or not.
    C#
        int newInt = Convert.ToInt32(someInt); // always compiles
    VB.NET
        Dim newInt As Integer = Convert.ToInt32(someInt) ' always compiles
    Is this a bad approach and are there any alternatives?
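    To make the trade-off concrete, a small C# sketch (variable names are illustrative) of what each option does when the value is null:

        int? maybe = null;

        int a = Convert.ToInt32(maybe);     // compiles against int and int? alike; null quietly becomes 0
        int b = maybe.GetValueOrDefault();  // also 0, but only compiles on int? - the intent is visible
        int c = maybe ?? -1;                // explicit fallback, chosen at the call site
        // int d = maybe.Value;             // throws InvalidOperationException when maybe is null

    The catch with Convert.ToInt32 is that it hides nullability rather than handling it: a null that should have surfaced as a bug becomes a silent 0.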

    Read the article

  • Should I use structures from a core library graphics toolkit in my domain?

    - by Laurent Bourgault-Roy
    In Java (and many other programming languages), there are often structures to deal with graphic elements: Colour, Shape, etc. Those are most often in a UI toolkit and thus have a relatively strong coupling with UI elements. Now, in the domain of my application, we often deal with colour, shape, etc., to display statistical information on an element. Right now all we do is display/save those elements, with little or no behaviour. Would it make sense to avoid "reinventing the wheel" and directly use the structures in java.awt.*, or should I make my own elements and avoid a coupling to this toolkit? It's not like those elements are going away anytime soon (they are part of the core Java library after all), but at the same time it feels weird to import java.awt.* server side. I have no problem using java.util.List everywhere. Should I feel differently about those classes? What would be the "recommended" practice in that case?

    Read the article

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both client and server code are written in Lua. On the server the actual AI code is separate from the TCP socket code and coroutine code (which spawns a separate instance of the AI for each connecting client). I want to be able to further isolate the AI code so that that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?
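    One common technique (a suggestion, not something from the post) is to make the contract a line-delimited JSON protocol over the existing TCP socket, so an AI module can be written in anything that can read a socket. A hypothetical exchange for a board game might look like:

        {"type": "move_request", "game": 17, "player": 2, "board": "..."}
        {"type": "move_response", "game": 17, "move": {"from": [3, 4], "to": [5, 6]}}

    The Lua side then only needs a JSON encoder/decoder, and the framing (one JSON object per line) is the entire interface an AI author has to implement.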

    Read the article

  • MPL with Commercial Use Restrictions (And Other Questions)

    - by PythEch
    So basically I want to use the MPL 2.0 for my open source software, but I also want to forbid commercial use. I'm not a legal expert, that's why I'm asking. Should I use a dual license (MPL + Modified BSD License)? Or what does sublicensing mean? If I wanted to license my project, I would include a notice in the header. What should I do if I want to dual-license or sublicense? Also, is it OK to use nicknames as copyright owners? I am not able to distribute additional files (e.g. LICENSE.txt, README.md, etc.) with the software, simply because it is just a JS script. By open source I mean not-obfuscated JS code. So in this case, am I forced to use the GPL to make redistribution of obfuscated work illegal? Thanks for reading, any help is appreciated; answering all of the questions is not essential.

    Read the article

  • New blog post shows immediately in Google search results whereas other HTML content takes time, why?

    - by Jayapal Chandran
    I have a blog which has been active for 3 years. Recently I posted an article and it appeared in Google search immediately - maybe within 5 to 10 minutes. A point to note is that I was logged into my Google account. Maybe Google checked my posts when I searched since I was logged in? Yet I logged out, used another browser, and searched again with that specific text, and it still appeared in the Google search results. How did this happen? However, if I publish an article as static HTML, it takes time. (I assume this is the case but I haven't tested much, though I have tested a few cases after updating my sitemap XML.) How does Google search work for a blog versus other content?

    Read the article

  • Which ATI driver version does Ubuntu 10.04 LTS use?

    - by Jack
    I have trouble with the ATI driver on Ubuntu 12.04 LTS. When I install the ATI driver in Ubuntu 12.04 via "Additional Drivers", I can't shut down Ubuntu: it shows a blank (black) screen and my laptop keeps running. Sometimes my screen looks like this: http://www.youtube.com/watch?v=U9_iygesbBM But when I install Ubuntu 10.04 and install the ATI driver via "Additional Drivers", it works well and I've seen no trouble. That's sweet, but 10.04 is old and is only supported until April 2013. So I want to know why Ubuntu 10.04 works better than 12.04 with the ATI driver (in my case)?

    Read the article

  • CSS specificity: Why isn't a CSS specificity weight of 10 or more class selectors greater than 1 id selector?

    - by ajc
    While going through the CSS specificity concept, I understood that it is calculated in four parts: 1) inline (1000), 2) id (100), 3) class (10), 4) HTML elements (1). The rule with the highest specificity is applied to the corresponding element. I tried the following example, creating more than 10 classes:
        <div class="a1">
          ....
          <div class="a13" id="id1"> TEXT COLOR </div>
          ...
        </div>
    and the CSS as:
        .a1 .a2 .a3 .a4 .a5 .a6 .a7 .a8 .a9 .a10 .a11 .a12 .a13 { color: red; }
        #id1 { color: blue; }
    Now, even though in this case there are 13 classes, for a weight of 130 - which should be greater than the id's 100 - the id still wins. Result - JSFiddle CSS specificity

    Read the article

  • Minimum to install for a visual web browser in Ubuntu Server

    - by Svish
    I have set up a machine with Ubuntu Server. Some of the server software I want to run on it has web-based user interfaces for setting it up, et cetera. I know I could connect to it from a different machine which has a graphical user interface, but in this case I would rather do it on the box. So, from a fresh Ubuntu Server installation, what is the minimum I need to install to be able to launch a web browser I can use for this? For example Chromium, Firefox, or Arora.
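    A hedged sketch of one minimal route (package names as of the 10.04-12.04 era; treat them as assumptions):

        # xorg gives a bare X stack; --no-install-recommends keeps the pull small.
        sudo apt-get install --no-install-recommends xorg xinit arora
        # Start X with the browser as the entire session - no desktop needed.
        xinit /usr/bin/arora -- :0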

    Read the article

  • Ubuntu gone out of order - won't install any packages. What do I do?

    - by Aborted
    Lately, I've been getting some strange behaviour from Ubuntu. First and most important, it won't install updates: it gives a package installation error and simply won't work. Earlier I tried to install TeamViewer via the Software Center, but got the same package error. I also feel like the connection speed is slower than it should be - I don't know if that one is relevant to this case. What's wrong with my installation? How do I fix these package installation errors?

    Read the article

  • Optimal database design for functionality letting users share posts by other users

    - by codecool
    I want to implement functionality that lets users share posts by other users, similar to the Facebook and Google+ share buttons and the Twitter retweet. There are two choices:
    1) I create a duplicate copy of the post and have a column which keeps track of the original post id and makes clear this is a shared post.
    2) I have a separate shared-post table where I save the post id, which is a foreign key to the post id in the post table.
    In programming terms, I keep a pointer to the original post in a separate table, and when I need to get both the posts posted by a user and the shared ones, I do a left join on the post and shared-post tables:
        Post(post_id (PK), post_content, posted_by)
        SharedPost(post_id (FK to Post.post_id), sharing_user, sharedfrom (in case someone shares from a non-owner's profile))
    I am in favour of the second choice but wanted the advice of the experts out there. One thing more: posts on my web app will be more along the lines of Facebook size, not tweet size.
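    A sketch of option 2 in SQL, reusing the names from the post (the column types are assumptions):

        CREATE TABLE post (
            post_id       INT PRIMARY KEY,
            post_content  TEXT,
            posted_by     INT
        );

        CREATE TABLE shared_post (
            post_id       INT REFERENCES post (post_id),  -- pointer to the original post
            sharing_user  INT,
            sharedfrom    INT                             -- profile it was shared from, if not the owner's
        );

        -- Everything on user 42's wall: their own posts plus the ones they shared.
        SELECT p.post_id, p.post_content, p.posted_by, sp.sharing_user
        FROM post p
        LEFT JOIN shared_post sp ON sp.post_id = p.post_id
        WHERE p.posted_by = 42
           OR sp.sharing_user = 42;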

    Read the article

  • Show line breaks in asp:label inside gridview

    - by Vipin
    To show line breaks in an asp:Label element, or for that matter inside a GridView, do the following. The first ItemTemplate works for mandatory (non-null) fields; the second handles nullable fields:
        <ItemTemplate>
            <%# ((string)Eval("Details")).Replace("\n", "<br/>") %>
        </ItemTemplate>

        <ItemTemplate>
            <%# FormatString(Eval("Details") as string) %>
        </ItemTemplate>
    In the code-behind, add the following FormatString function:
        protected string FormatString(string strHelpMessage)
        {
            string rtnString = string.Empty;
            if (!string.IsNullOrEmpty(strHelpMessage))
                rtnString = strHelpMessage.Replace(Environment.NewLine, "<br/>");
            return rtnString;
        }

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx [This post is about an experiment and some imagination.]
    Since Windows XP (the oldest OS I tried), I have seen a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), you just select the file, right-click > Send to, and you can send the file to many places. My menu shows Skype because I have it installed; this confirms that we can add our own app there to make it easier for the user to send files into our app. Nowadays many people use the cloud or online sites to store files. With HTML5 drag and drop, you need to have the site open on its file-upload page, select everything, then drag and drop; the file is then uploaded to the server and the site shows it in a list (if no error happens). But this kind of upload is seriously clumsy, since I have to open the site for every operation.
    Through this post I want to describe a feature that could make this better. It is simply called the SASS FILE UPLOAD API. When you surf to a site's file-upload page, the page tells you that it also supports the SASS FILE API, and you can enable it for a better experience.
    How this works: the API is activated on two conditions. 1. The feature is disabled by default on the site (though you can change that). 2. The API allows only specific sites to upload files, and uploads may have rules - for example the minimum or maximum size of an uploaded file, or which formats the site allows. On a resume site you might be allowed to upload only .doc (according to the site's code).
    How the browser recognizes that a site offers the SASS service: the HTML source contains a meta tag similar to this:
        <meta name="sass-upload-api" path="/upload.json"/>
    Remember that upload.json is a file that defines the values of various settings:
        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get": "file/get", "routing": "/file/get/{fileName}" },
            { "post": "file/post", "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put": "file/put", "routing": "/file/put/{fileName}" },
            { "all": "file/all", "routing": "/file/all/{fileName}" }
          ]
        }
    cookie_name is simply a cookie, stored in the browser and defined in the JSON. We define cookie_name so the cookie can be shared with the service on the Windows system; only this cookie will be accessible to the service, which keeps things security-safe - other cookies will not be shared. Files are posted, put, and fetched via these locations; the "all" location simply returns the whole list of files, as a treeview-like JSON showing the directories on the server.
    For example, if you have activated the API with example.com, you will see a Send to option in explorer.exe. When you send, a window opens asking which folder on the server you want to send the file to, and it also describes the limits and how much you can upload. The site never needs to be open. The file is uploaded over the FTP protocol, which is better for performance.
    How this API makes things faster: suppose you want to ask a question and post an image with it. You upload the image in advance, and when you open stackoverflow.com the site only asks which of your uploaded files to attach to the question you are asking. A second use is for people who use cloud apps: no more need for drag and drop, and no need to open the site either. This is still at the experimental level; I will update this post when I make progress on this API.

    Read the article

  • Offline apt-get update to age of cache

    - by James Haigh
    I have a script to quickly upgrade a Live or fresh system from cached files on a flash drive. In essence, it looks like this:
        # *Code to remove and symlink /var/cache/apt/ if currently empty of packages.*
        sudo apt-get dist-upgrade   # Quick offline cached upgrade; not limited by slow WANs.
        echo $'\nMake sure Internet is reachable and press enter for complete online upgrade.'; read
        sudo apt-get update
        sudo apt-get dist-upgrade   # Complete online upgrade.
    The problem is that the 'cached upgrade' seems to ignore the cached pkgcache.bin and srcpkgcache.bin, which is where I assume apt-get update stores its changes, so the upgrade completes as if the system were up-to-date. Useless. So in that case, I need some code to apt-get update to the age of the package cache on my flash drive. This code would be placed between the 1st and 2nd lines of the code above.

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:
    Scene 1: A dirt road (nothing else). Geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh.
    Scene 2: A dirt road (nothing else). Geometry: a simple mesh in the form of a road, where maps and textures simulate the cracks, bumps, etc.
    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this? Go heavy on the textures? Or have a blend of both?

    Read the article

  • No lock screen when the right click menu is open

    - by Shivram
    I just found that the lock screen on my laptop does not show up when the right-click menu is open. I discovered this when I pressed the right-click button accidentally while closing the lid: the laptop went to sleep, but the lock screen never showed up when I opened the lid again. I am also not able to lock the system manually (with Ctrl+Alt+L) when the right-click menu is open. Is this by design or is it a bug? I would assume that the screen should be locked automatically when the computer goes to sleep, but this doesn't seem to be the case. Does anyone have any suggestions? My specs are: Dell Vostro 1500, Ubuntu 12.10.

    Read the article

  • Complete deployment to AWS

    - by Ionut
    I'm trying to deploy a Java application to the AWS free tier. I need the following:
    - RDS, using the MySQL client
    - S3, required for the Lucene index and image uploading
    - SES, so I can send emails to newly registered users
    - Namecheap as my domain provider
    - an EC2 instance
    - an Elastic Beanstalk instance
    I managed to create an EC2 instance, upload the WAR file, and link it to the Namecheap domain. However, I find it difficult to link the other services to the current application, and I find the documentation a little messy; I can't find the right way to do this. Can you provide a simple walk-through to deployment for this use case? Thanks!

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be agile to varying degrees; I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.
    We're probably all familiar with the concept of "Done Done", which represents what *actually* being done with a feature means - not just a developer saying he's done right after he writes the last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To internalize the concept of "Done Done", teams usually get together and come up with their Definition of Done (DoD), which defines all the activities that need to be completed before a feature is considered Done Done.
    The Done Done technique is typically applied only to features (aka User Stories). What I do is extend it to several concepts: User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:
    - Code reviews
    - StyleCop
    - FxCop
    - User manuals updated
    - Architecture docs updated
    - Tested by QA
    - Tested by UAT
    - Installers created
    - Support knowledge base updated
    - Deployment instructions (for Ops) written
    - Automated unit tests run
    - Automated integration tests run
    Then we arrange these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (the Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and the Story cycle right away, and that becomes our starting DoDs. Over time the team makes an effort to keep moving activities down from Release to Sprint to Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed: they would need to be updating user manuals, creating installers, and doing UAT testing (typical Release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature.
    This is a great technique to give a team an easy-to-follow roadmap for maturing their agile practices over time. Sure, there are other aspects to maturity outside of this, but it's a great technique - easy to visualize - for driving agility into the team. Just keep moving those activities (aka "gates") down the board from Release to Sprint to Story. Here is an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):
    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing - in this case that meant deploying to an environment, shared across the enterprise, that mirrored production, and asking other business groups to test their own apps to ensure we didn't break anything outside our system
    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created - in this case we decided to create a video each sprint showing off the progress (a video version of the Sprint Demo)
    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed to the shared QA environment using the automated deployment process
    - Peer code review
    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests
    PS - One of my clients had a great question when we went through this exercise. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless - it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true - the Sprint is done regardless when the end date rolls around - if the DoD activities haven't been completed, I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word, but you get the idea). In the Retrospective, that becomes an agenda item: to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Is it time to add IPv6 access to my websites?

    - by Rob Hoare
    I have several dedicated servers and VPS servers, and some of those are at companies that have provided me with native IPv6 blocks (in addition to the IPv4 addresses). Does it currently make sense to point an AAAA record to an IPv6 address on my server, in addition to the A record pointing to the IPv4 address? This would be for the www subdomain, for example (the networking and web server software would be set up on the server to respond appropriately). A while ago I read that a small percentage of users (1 in a thousand?) would have slow or no access if a subdomain had both A and AAAA records, because their networking software asked for one and got the other. Is that still the case? Will adding an AAAA record inconvenience some users, or is the percentage already smaller and falling? In other words, is now the time to add native IPv6 support for a busy website aimed at the general public, or is it still too early?

    Read the article
