Search Results

Search found 26454 results on 1059 pages for 'post parameter'.

Page 434/1059

  • 3d vertex translated onto 2d viewport

    - by Dan Leidal
    I have a spherical world defined by simple trigonometric functions to create triangles that are relatively similar in size and shape throughout. What I want to be able to do is use mouse input to target a range of vertices in the area around the mouse click, in order to manipulate these vertices in real time. I read a post on this forum regarding translating 3D world coordinates into the 2D viewport. It recommended multiplying the world vector coordinates by the view and then the projection, but it didn't include any code examples, and suffice it to say I couldn't get any good results. Further information: I am using a look-at method for the camera. Does this cause a problem, and if so, is there a solution? If this isn't the problem, does anyone have a simple code example illustrating translating one vertex in a 3D world into 2D view space? I am using XNA.
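
    For what it's worth, XNA has this projection built in: Viewport.Project applies the world, view and projection matrices plus the viewport transform in one call, so a look-at camera is not a problem in itself (it just supplies the view matrix). A minimal sketch, assuming a camera position/target and a standard perspective projection (the names here are placeholders):

        // Minimal sketch (XNA 4.0): project one world-space vertex to 2D screen space.
        Matrix world = Matrix.Identity;
        Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 1000f);

        // Project applies world * view * projection, then the viewport transform.
        Vector3 screen = GraphicsDevice.Viewport.Project(worldVertex, projection, view, world);
        // screen.X/screen.Y are pixel coordinates; screen.Z is depth in [0..1].
        // For the reverse (mouse click -> world ray), call Viewport.Unproject at
        // Z = 0 and Z = 1 and build a ray through the two results.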

    Read the article

  • OWB 11gR2 – Parallel DML and Query

    - by David Allan
    A quick post illustrating conventional (non-direct-path) parallel inserts and queries using OWB, following on from some recent posts from Jean-Pierre and Randolf on this topic. The mapping configuration properties are where you can define these hints in OWB. Taking JP's simple illustration, the parallel query hints in OWB are defined on the ‘Extraction hint’ property for the source, and the parallel DML hints are defined on the ‘Loading hint’ property on the target table operator. If we then generate the code you can see the intermediate code generated below. Finally, remember you need a parallel-enabled session for this all to fly. Anyway, hope this helps join a few dots.
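
    For readers who can't see the generated code in the screenshots, the result is roughly the following (a sketch; the table names and degree of parallelism are made up):

        -- The session must be parallel-enabled for parallel DML to take effect
        ALTER SESSION ENABLE PARALLEL DML;

        -- The 'Loading hint' lands on the INSERT, the 'Extraction hint' on the SELECT
        INSERT /*+ PARALLEL (TGT, 4) */ INTO target_table TGT (id, amount)
        SELECT /*+ PARALLEL (SRC, 4) */ id, amount
        FROM   source_table SRC;

        COMMIT;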

    Read the article

  • Multiple 301 redirects, do search engines/viewers see them all?

    - by Karim
    I've put lots of different 301 rules in place to deal with numerous URL changes, and for certain URLs there are 3-4 different 301 redirects landing visitors on the new URL. I heard that a 301 loses some PageRank/link juice. All the 301s are on-site for the same domain, with a mix of PHP 301s and htaccess 301s. So, for instance: articles/news.php?id=2 --- articles/blog.php?id=2 [filename change]; articles/* --- /* [subdir to root]; /blog.php?id=2 --- /title-of-post [mod_rewrite URL change]. So if you were to visit /articles/news.php?id=2 there would be two 301 redirects until you land on /yellow-wellington-boots/. My question is: does Google see the intermediate redirects, or just the final page the 301s redirect to?
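
    (As an aside, the usual advice for chains like this is to also map each known old URL straight to its final destination, so common entry points take a single hop; a hypothetical .htaccess sketch for the example above:)

        # Hypothetical: send the oldest URL form directly to the final URL
        # in one 301 instead of walking the whole chain.
        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^id=2$
        RewriteRule ^articles/news\.php$ /yellow-wellington-boots/? [R=301,L]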

    Read the article

  • RewriteRule not working at server level?

    - by Alexis Wilke
    I wanted to forbid some robots from doing certain things to my websites and decided to add a RewriteRule for that purpose. The rule works when put in one of my <VirtualHost *:80> tags and looks like this: RewriteEngine On RewriteCond %{HTTP_USER_AGENT} libwww-perl RewriteCond %{REQUEST_METHOD} POST RewriteRule . - [F,L] However, I wanted to apply it to all my websites instead of just one of them. So, with the newest layout of the Apache2 settings, I decided to put that code in the security.conf file. This file is defined under /etc/apache2/conf-available/... (and yes, I have a symlink from the /etc/apache2/conf-enabled/... directory.) However, if the definition is only in the conf-available/security.conf file, it somehow gets ignored. From the documentation, these Rewrite* commands all work at server level! Any idea what I might be missing?
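
    A likely culprit: mod_rewrite directives defined in server context are not inherited into virtual host context by default, so rules in conf-enabled are silently ignored inside every <VirtualHost>. A sketch of the two usual fixes (version-dependent, so treat this as a starting point):

        # Option 1 (Apache 2.4.8 or later): in security.conf, push the
        # server-context rules down into all virtual hosts.
        RewriteEngine On
        RewriteOptions InheritDown
        RewriteCond %{HTTP_USER_AGENT} libwww-perl
        RewriteCond %{REQUEST_METHOD} POST
        RewriteRule . - [F,L]

        # Option 2 (older 2.4.x): each <VirtualHost> opts in explicitly with
        #   RewriteEngine On
        #   RewriteOptions Inherit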

    Read the article

  • sysctl.conf ignores net settings

    - by Steffen Unland
    I have a little problem with sysctl on an Ubuntu 10.04 LTS system. When I set the sysctl values with "sysctl -w" everything works fine, but when I try to use the sysctl.conf file, the net settings are ignored. For example, my sysctl.conf: # /etc/sysctl.conf - Configuration file for setting system variables kernel.domainname=findme.sysctl # Corefiles information fs.suid_dumpable=2 kernel.core_pattern=/cores/core-%e-%s-%u-%g-%p-%t # Functions previously found in netbase net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait=1 net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait=1 When I grep for the values, I can see that the net.ipv4.netfilter settings are not set: [host:~ ] $ sysctl -a | grep domainname kernel.domainname = findme.sysctl [host:~ ] $ sysctl -a | grep "core_pattern" kernel.core_pattern = /cores/core-%e-%s-%u-%g-%p-%t [host:~ ] $ sysctl -a | grep "timeout_fin_wait" net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120 net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120 [host:~ ] $ sysctl -a | grep "timeout_close_wait" net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60 net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60 Can somebody help me solve the problem? If you need more information I can post it. Cheers, Steffen
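
    The usual explanation on this vintage of Ubuntu: the net.ipv4.netfilter.* keys only exist once the conntrack modules are loaded, and at boot /etc/sysctl.conf is applied before the firewall loads them, so those two lines are skipped. A hedged sketch of the common workaround (the module name can vary by kernel version):

        # /etc/modules -- load conntrack before procps applies sysctl.conf
        nf_conntrack_ipv4

        # Alternatively, re-apply the file once the modules are loaded:
        #   sudo sysctl -p /etc/sysctl.conf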

    Read the article

  • Interesting/Innovative Open Source tools for indie games [closed]

    - by Gastón
    Just out of curiosity, I want to know about open source tools or projects that can add some interesting features to indie games, preferably features that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue they are essential to developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some, I am sure, will have been covered by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.   A bit of background about me: my name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server data warehouse design, Analysis Services and enterprise reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.   Enough about me; on to a real problem: SSIS connection timeouts versus command timeouts. Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to an Analysis Services cube and dropping old ones.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if any partitions present are no longer needed in a rolling 60-month window. My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and then failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds.
The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.  The lesson learned for me was two-fold: Always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, in the 5+ years that I have been working with SSIS, I can only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
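
The same split exists outside SSIS. In plain ADO.NET, for instance, the two timeouts are configured in two different places, and the command timeout has the same default of 30 seconds (an illustration with made-up server names, not the SSIS property itself):

        using System.Data.SqlClient;

        // "Connect Timeout" only governs establishing the session.
        var conn = new SqlConnection(
            "Server=myServer;Database=myDb;Integrated Security=SSPI;Connect Timeout=180");

        // CommandTimeout governs how long a query may run before results
        // start returning; it defaults to 30 seconds.
        var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.BigTable", conn);
        cmd.CommandTimeout = 600;   // seconds; 0 means wait indefinitely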

    Read the article

  • Artificial Intelligence implemented in x86 Assembly? [closed]

    - by Bigyellow Bastion
    Okay, so I decided that for my upcoming operating system, I'll do basically everything in x86 assembly, using only 16-bit mode. I will need to write the software to host on it once I have something up and going, and I'll definitely post the source and a VM-executable file. But for now I'm stuck on the idea of implementing the AI code for some of the games I'm making to host on it. AI in assembly is tedious, and sometimes seems almost impossible, especially complex AI (I'm talking SNES Super Mario World 2: Yoshi's Island AI here, by the way, not Pong AI). I was thinking it would be such a hassle that I'd have to bring in a higher-level language to work some of this out, like maybe C++ or C#, but then I'd have to go through more work linking it into a binary that my OS can host, and that adds unnecessary work I wanted to avoid (I don't want a complex system, I want everything as bare-bones as possible, avoiding libraries, APIs, and linkable formats for now, to make everything more directly accessible to the kernel's API).
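
    The classic way to keep assembly AI tractable is a table-driven state machine: each behavior is a short handler, and the per-frame update is one indirect jump. A hypothetical 16-bit NASM sketch, not from the post:

        ; Hypothetical: table-driven enemy AI, 16-bit NASM.
        enemy_state  dw 0                     ; 0 = idle, 1 = chase, 2 = flee

        ai_update:
            mov  bx, [enemy_state]
            shl  bx, 1                        ; table entries are words
            jmp  [state_table + bx]           ; dispatch; each handler ends in ret

        state_table  dw state_idle, state_chase, state_flee

        state_idle:   ret                     ; decide whether to chase (omitted)
        state_chase:  ret                     ; move toward the player (omitted)
        state_flee:   ret                     ; move away (omitted)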

    Read the article

  • SEO: dofollow trackback or nofollow trackback?

    - by Ernesto Marrero
    Thinking about SEO: is it better for trackbacks to be dofollow or nofollow? When I write a post on my blog, it automatically sends a trackback to my site from http://bitacoras.com, and that link is dofollow. Is it advisable to give relevance back by making my link dofollow as well? For example: some domain's homepage has PageRank 3, and it has an internal page with PageRank 0 that I send trackbacks to. Is it worth sending a dofollow link to that page to increase its PageRank? Would my PageRank increase if I did this? Does it work well? Any written reference would be appreciated.
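
    For reference, the only mechanical difference between the two is the rel attribute on the link (URLs here are placeholders):

        <!-- dofollow (the default): passes link equity -->
        <a href="http://example.com/post">my post</a>

        <!-- nofollow: asks search engines not to pass equity -->
        <a href="http://example.com/post" rel="nofollow">my post</a>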

    Read the article

  • Adding a custom document template to the document Library

    - by ybbest
    After you create a SharePoint document library, you can start creating documents based on the default document template. If you would like to add your own custom template, you can easily achieve this by creating a SharePoint solution using Visual Studio. In this post, I'd like to show how to add a custom document template to a SharePoint document library. You can download the complete source code here. 1. Create an empty SharePoint solution, create a document library called "YbbestCustomDocLib" and add a Module with a Word document template called FAX.dotx. 2. Modify the Elements.xml file in the module so that it points to the new template. 3. Finally, you need to create a feature receiver to configure the DocumentTemplateUrl property of the document library, as sketched below. You can download the complete source code here.
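
    The feature receiver in step 3 boils down to something like the following sketch (server object model; the template URL is an assumption and must match where the Module actually deploys FAX.dotx):

        using Microsoft.SharePoint;

        public class YbbestCustomDocLibFeatureReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                var web = (SPWeb)properties.Feature.Parent;
                var list = (SPDocumentLibrary)web.Lists["YbbestCustomDocLib"];

                // Point the library's default "New Document" template at FAX.dotx.
                // Assumed path: must match the Module's deployment location.
                list.DocumentTemplateUrl = "YbbestCustomDocLib/Forms/FAX.dotx";
                list.Update();
            }
        }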

    Read the article

  • What IDEs are available for Ubuntu?

    - by Roland Taylor
    This question exists because it has historical significance, but it is not considered a good, on-topic question for this site, so please do not use it as evidence that you can ask similar questions here. See the FAQ for more information. This is a community wiki for IDEs available on Ubuntu. Please post one IDE per answer, and include more than just a screenshot or a link; at least add a short description. In your answer, tell us what the IDE is for (which language(s) it supports, or whether it is RAD-capable).

    Read the article

  • IIS Not Accepting Login Credentials

    - by Dale Jay
    I have an ASP.NET web form using Microsoft's boilerplate Active Directory login page, set up exactly as suggested. (See http://msdn.microsoft.com/en-us/library/ms180890%28v=vs.80%29.aspx) Windows Authentication is activated at the "Default Website" and "MyWebsite" levels, and Domain\This.User is given "Allow" access to the site. After entering valid credentials for This.User on the web form, a popup window appears asking me to enter my credentials yet again. Despite entering valid credentials for This.User (in both the Domain\This.User and This.User formats), it rejects the credentials and returns an unauthorized-user page. The Active Directory user This.User is valid, the IP address of the AD server has been verified, and SPNs have been set up for the server. Any thoughts as to what may be causing this? I can post code if needed.
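
    For comparison, the relevant web.config section from that boilerplate setup looks like this (the domain and user are placeholders):

        <system.web>
          <!-- Integrated Windows authentication; IIS must have it enabled too -->
          <authentication mode="Windows" />
          <authorization>
            <allow users="DOMAIN\This.User" />
            <deny users="?" />  <!-- reject anonymous users -->
          </authorization>
        </system.web>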

    Read the article

  • Why is prefixing column names considered bad practice?

    - by P.Brian.Mackey
    According to a popular SO post, it is considered bad practice to prefix column names with the table name. At my company every column is prefixed with its table name. This is difficult for me to read. I'm not sure of the reason, but this naming is actually the company standard. I can't stand the naming convention, but I have no documentation to back up my reasoning. All I know is that reading AdventureWorks is much simpler. In our company DB you will see a table Person, and it might have a column named Person_First_Name, or maybe even Person_Person_First_Name (don't ask me why you see 'person' twice). Why is it considered bad practice to prefix column names? Are underscores considered evil in SQL as well? Note: I own Pro SQL Server 2008 Relational Database Design and Implementation. References to that book are welcome.
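
    The readability argument is easiest to see side by side (a made-up example):

        -- Prefixed style: the table name is repeated in every column reference
        SELECT p.Person_First_Name, p.Person_Last_Name
        FROM   Person AS p
        WHERE  p.Person_First_Name = 'Ada';

        -- Unprefixed style: the alias already supplies the context
        SELECT p.FirstName, p.LastName
        FROM   Person AS p
        WHERE  p.FirstName = 'Ada';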

    Read the article

  • Exalogic Echo release available by Qualogy

    - by JuergenKress
    Just a quick post: there are new Exalogic goodies on Oracle E-Delivery today, something we have been eagerly waiting for since the spring. Exalogic Stack version 2.0.6.0.0 (dubbed the "Echo release") is now available for download on E-Delivery (as demonstrated in the screenshots below). This is the third release of the "Exalogic virtual datacenter stack". When more information is published about feature additions, improvements and bug fixes in this new release, I will let you know! Read the full article here. For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Rules of Holes #5: Seek Help to Get Out of the Hole

    - by ArnieRowland
    You are moving along, doing good work, maintaining a steady pace. All seems to be going well for you. Then BAM! A Hole just grabbed you. How the heck did that happen? What went wrong? How did you fall into a Hole? Definitely, you will want to do a post-mortem and try to tease out what missteps led you into the Hole. Certainly you will want to use this opportunity to enhance your Hole-avoidance skills. But your first priority is to get out of this Hole right NOW. Consider the Fifth Rule of Holes...(read more)

    Read the article

  • NVIDIA 560 TI driver install ubuntu 14.04 leads to "missing on display" error

    - by allthosemiles
    Currently, on my Ubuntu 14.04 install, I boot up, install the latest updates and attempt to install the NVIDIA drivers from xorg-edgers, following the top answer on this post: Installing Nvidia Drivers. It installs 304 for my card, and when I check "glxinfo | grep OpenGL" I get about 8 lines that read Xlib: extension "NV-GLX" missing on display ":0". I then did "sudo apt-get install nvidia-current" to get the latest; it installed without any errors in the terminal. I'm not sure what I'm doing wrong here. I'm not a complete beginner with Linux, so I can find my way around just fine, but I can't find a solution to this issue. Should I have done one of these two instead? sudo apt-get install nvidia-304 sudo apt-get install nvidia-graphics-drivers-304
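
    That error message usually means the X server and the OpenGL libraries come from two different driver versions, which is easy to do when a PPA driver and nvidia-current from the archive get mixed. The commonly suggested recovery (a sketch, not a guaranteed fix) is to purge everything and install a single version from a single source:

        # Remove all installed NVIDIA driver packages, then install one
        # version from one source only, and reboot.
        sudo apt-get purge 'nvidia-*'
        sudo apt-get install nvidia-304
        sudo reboot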

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let's look at the content. (BTW, if you did read that post and want some more info, then read Nia Angelina's post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from speaker training. I find it hard to understand why Microsoft allows certain people on stage who speak English with such strong accents that it's hard for attendees, especially those from abroad, to understand them. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here; I am a regular speaker myself, so I tend to look for other things when I am in a room than most audience members do, and my opinion might differ from others'. All in all, the knowledge of the speakers was above average, but the presentation skills were most of the time below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012, it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8, which launched the day before Build started. Most sessions dealt with C# and JavaScript, although I did see a tendency to use C++ more. Touch. Well, that was the focus of a lot of sessions; that goes without saying. Microsoft is really betting on touch these days, and being a touch-oriented developer I can only applaud this. The term NUI is getting a bit outdated, but the principles behind it certainly aren't. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still, the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spent some serious thought on this as well. It was all about making your apps run everywhere, on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDKs, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the mobility part of Azure) and some fantastic hardware. And speaking of hardware: partners such as Acer, Lenovo and Dell are making hardware that puts Apple to shame nowadays. To illustrate: in Bellevue (very close to Redmond, where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it's easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing than what the Cupertino guys have to offer. That was very visible in the number of people visiting the stores: even on the day that Apple launched the iPad Mini, there were more people in the Microsoft store than in the Apple store.
So, the future looks like it's going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It's brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure... need I say more?). It's up to us devs to fill up the stores with applications that match this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man... People were drooling all over it wherever I went. It is fantastic :-)

    Read the article

  • More About PeopleSoft Feature Packs

    - by john.webb(at)oracle.com
    In my previous PeopleSoft Feature Pack post I introduced the new PeopleSoft Feature Pack delivery process. The response has been fantastic. It appears our customers agree that this new offering benefits them in many ways.   Since there has been so much interest in our Feature Pack strategy and since so many customers have been referencing our PeopleSoft FAQ in which we explain this new delivery mechanism, we've created the short presentation below to further explain Feature Packs.    

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files. Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment. Below is some of the information and detail regarding the form I created for the session. There are many blogs and Microsoft articles on how to create a basic form, so I won't repeat that information here. This post will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach. The link above contains the zipped package files of the two InfoPath forms (a no-code solution and a coded solution), the list template for the Location list I used, and the PowerPoint deck. If you plan to use these templates, you will need to update the forms to work within your own environment (change data connections, code links, etc.). Also, you must have the SharePoint Enterprise version, with InfoPath Services configured, in order to use the web-browser-enabled forms. So what are the requirements for this template? Business Card Request Form Template design plan: gather user information and requirements for the card; pull in as much user information as possible, using data from the user profile web services as a data source; show and hide fields as necessary for the requirements; create multiple views, one for those submitting the form and another for the executive assistants placing the orders; a browser-based form integrated into a SharePoint team site; submitted directly to a form library. The base form was created using the blank template. The table and rows were added using the Insert tab and selecting Custom Table. The use of tables is a great way to make sure everything lines up. You do have to split the tables from time to time. If you've ever split cells and then tried to re-align one, only to find that you impacted the others, you know why. Here is what the base form looks like in InfoPath. Show and hide fields as necessary for requirements: you will notice I also used Sections within the form. These show or hide depending on the options selected or on whether or not fields are blank. This is a great way to prevent your users from feeling overwhelmed by a large form (this one wouldn't apply). Although not used in this one, you can also use various views with a tab interface. I'll show that in another post. Gather user information and requirements for the card: utilizing rules, you can load data when the form loads (Data tab, Form Load). Anything you can automate is always appreciated by the user, as that is data they don't have to enter. For example, loading their user ID or other user information on load. Always keep in mind, though, how much data you load and the method for loading that data (through rules, code, etc.). These have an impact on form performance: the form will take longer to load if you bring in a ton of data from external sources. Laura Rogers has a great blog post on using the User Information List to load user information. If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit. What I have found is that using the User Profile service via code-behind or the web service "GetUserProfileByName" (as above) can take more time to load the user data. Just food for thought. You must add the data connection in order for the above rules to work.
You can reach the data connections through the Data tab, Data Connections, or by selecting the Manage Data Connections link which appears under the main data source. The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc. Create multiple views: one for those submitting the form and another for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience. Once they've entered the information and submitted their request for business cards, they don't really need to see the main data input screen any more; they just need to view what they entered. From the Page Design tab, select New View and give the view a name. To review the existing views, click the down arrow under View. The ReviewView shows just what the user needs and nothing more. Once you have everything configured, the form should be tested within a test SharePoint environment before final deployment to production. This validates that you don't have any rules or code that could impact the server negatively. Submitted directly to a form library: you will need to know the form library that you will be submitting to when publishing the template. Configure the Submit data connection to connect to this library. There is already one configured in the sample, but it will need to be updated to your environment prior to publishing. The design template is different from the published template. While both have the .XSN extension, the published template contains all the "package" information for the form. The published form is what is loaded into Central Admin, not the design template. Browser-based form integrated into a SharePoint team site: in Central Admin, under General Settings, select Manage Form Templates. Upload the published form template and activate it to a site collection. Now it is available as a content type to select in the form library. Some documentation on publishing form templates: TechNet – Manage administrator-approved form templates. And that's all our base requirements. Hope this helps to give a good start.
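
For completeness, calling that same GetUserProfileByName web service from code-behind looks roughly like this (a sketch; it assumes a web reference named UserProfileSvc was added for http://server/_vti_bin/UserProfileService.asmx, and the account name is a placeholder):

        // Sketch: read profile properties for one account via the web service.
        var svc = new UserProfileSvc.UserProfileService();
        svc.UseDefaultCredentials = true;

        UserProfileSvc.PropertyData[] props =
            svc.GetUserProfileByName(@"DOMAIN\user.name");

        foreach (var prop in props)
            if (prop.Values.Length > 0)
                Console.WriteLine("{0} = {1}", prop.Name, prop.Values[0].Value);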

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer); the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] (too old), [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient Multifragment Effects on Graphics Processing Units, Louis Frederic Bavoil] and some internet discussions. But there is not much information there and I want to read more. It should contain something about application, driver, memory and shader unit communication and data transfers; about vertices and attributes; and also about the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers or rasterization. It can also be about OpenGL (not DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Using YouTube as a CDN

    - by Syed
    Why isn't YouTube used as a CDN for video and audio files? Through YouTube's API and developer tools, it would be possible to post all media files to YouTube from a CMS and then call them when needed. This seems to be within YouTube's TOS; it's a cost-effective way to store, retrieve, and distribute media files, and it could also make for easy monetization. I ask because I'm working on a new project for a public radio station. I can't figure out the real downside to this sort of implementation.
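
    The retrieval half, at least, is trivial: once a file is on YouTube, the standard embed serves it from YouTube's infrastructure, CDN-style (the video id is a placeholder):

        <iframe width="640" height="360"
                src="https://www.youtube.com/embed/VIDEO_ID"
                frameborder="0" allowfullscreen></iframe>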

    Read the article

  • I’m back, now with Windows Live Writer goodness

    - by Dave Yasko
    I’ve reimaged my home laptop.  I’m trying to populate it with as much free goodness as possible to see if the free way is as good as the old pay way.  Turns out, I’ve got access to Windows Live Writer.  I’m not sure where that came from, maybe with Vista Ultimate.  I don’t know.  Either way, it makes my blog posting a whole lot easier.  So, maybe, just maybe, it will make me more likely to post.  We’ll see. Later.

    Read the article

  • Data-tier Applications in SQL Server 2008 R2

    - by BuckWoody
    I had the privilege of presenting to the Adelaide SQL Server User Group in Australia last evening, and I covered the Data-tier Application (DAC) and the Utility Control Point (UCP) from SQL Server 2008 R2. Here are some links from that presentation: Whitepaper: http://msdn.microsoft.com/en-us/library/ff381683.aspx Tutorials: http://msdn.microsoft.com/en-us/library/ee210554(SQL.105).aspx From Visual Studio: http://msdn.microsoft.com/en-us/library/dd193245(VS.100).aspx Restrictions and capabilities by edition: http://msdn.microsoft.com/en-us/library/cc645993(SQL.105).aspx Glenn Berry's blog entry on scripts for UCP/DAC: http://www.sqlservercentral.com/blogs/glennberry/archive/2010/05/19/sql-server-utility-script-from-24-hours-of-pass.aspx Objects supported by a DAC: http://msdn.microsoft.com/en-us/library/ee210549(SQL.105).aspx

    Read the article

  • When is a Use Case layer needed?

    - by Meta-Knight
    In his blog post The Clean Architecture, Uncle Bob suggests a 4-layer architecture. I understand the separation between business rules, interfaces and infrastructure, but I wonder if/when it's necessary to have separate layers for domain objects and use cases. What added value does it bring, compared to just having the use cases as "domain services" in the domain layer? The only useful info I've found on the web about a use case layer is an article by Martin Fowler, who seems to contradict Uncle Bob about its necessity: "At some point I may run into the problems, and then I'll make a Use Case Controller - but only then. And even when I do that I rarely consider the Use Case Controllers to occupy a separate layer in the system architecture." Edit: I stumbled upon a video of Uncle Bob's Architecture: The Lost Years keynote, in which he explains this architecture in depth. Very informative.
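
    For concreteness, a use case (interactor) in that architecture is typically one class per application-level operation, orchestrating entities behind an interface so it knows nothing about the web or the database. A minimal hypothetical sketch (Order is a stand-in domain entity):

        // Hypothetical interactor: application-specific orchestration only.
        public interface IOrderRepository { void Save(Order order); }

        public class PlaceOrderUseCase
        {
            private readonly IOrderRepository _orders;
            public PlaceOrderUseCase(IOrderRepository orders) { _orders = orders; }

            public void Execute(int customerId, string[] items)
            {
                var order = new Order(customerId, items); // entity enforces the domain rules
                _orders.Save(order);                      // persistence stays behind the interface
            }
        }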

    Read the article

  • Opening images sorted by modification date/size/type/etc

    - by menino bolinho
    Suppose I have a folder with pictures in it. If I sort them by name, then once I open one with Image Viewer and navigate to the others, the order is respected. But if I sort my files by modification date, for example, that doesn't work. Basically, the default Image Viewer only lets you navigate images by name. According to this post on the Ubuntu forums, this has been an issue since 2007! Is there a good/easy way to fix it? It seems like such a trivial thing to me.
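
    Until that's fixed, one workaround is a throwaway folder of symlinks numbered in modification-date order, so the viewer's name sort matches the date sort (a sketch; assumes filenames without newlines):

        # Build ./by-date/ with links named 001_..., 002_... in mtime order,
        # then open the first link in the image viewer.
        mkdir -p by-date
        i=0
        ls -t -- *.jpg | while read -r f; do
            i=$((i + 1))
            ln -s "../$f" "$(printf 'by-date/%03d_%s' "$i" "$f")"
        done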

    Read the article
