Search Results

Search found 2268 results on 91 pages for 'bill smith'.


  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
     Filtering the informative, insightful and quirky from the fire hose of cloud-based hype. Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations: “…a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015.” With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years. Cloud computing also looks to save energy: “A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction.” More on that story here. The expansion of Windows Azure has been in the news with the announcement of “East US” and “West US” datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
     I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation and identify configuration changes. I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

     Disaster Planning

     ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server; you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I generate programmatically rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster. Or you could DIFF them with the configuration on the new server.

     Configuration Changes

     These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

     Database Scripting

     Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file these really aren't appropriate for recreating an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

     Conclusion

     I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
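     The hand-generated login script is the interesting part, since SIDs and password hashes have to round-trip exactly. The utility itself is a C#/SMO console application, but here is a minimal sketch of the same idea in Python; the pyodbc connection string is an assumption for your environment, while sys.sql_logins and LOGINPROPERTY are the documented SQL Server pieces it relies on:

        import pyodbc

        # Connection string is an assumption -- adjust driver/server as needed.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;Trusted_Connection=yes"
        )
        cursor = conn.cursor()

        # sys.sql_logins carries each login's SID; LOGINPROPERTY exposes the
        # password hash so the login can be recreated without knowing the password.
        cursor.execute("""
            SELECT name, sid, LOGINPROPERTY(name, 'PasswordHash') AS password_hash
            FROM sys.sql_logins
            WHERE name NOT LIKE '##%'  -- skip internal certificate-mapped logins
        """)

        for name, sid, pw_hash in cursor.fetchall():
            if pw_hash is None:
                continue  # nothing to script without a password hash
            # HASHED + SID recreates the login byte-for-byte on the DR server,
            # so database users map back to it without ALTER USER fix-ups.
            print(f"CREATE LOGIN [{name}] "
                  f"WITH PASSWORD = 0x{pw_hash.hex()} HASHED, "
                  f"SID = 0x{sid.hex()};")

     Windows logins, role memberships and everything else would still come from SMO as described above; this sketch only covers the one script the post says is generated by hand.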

    Read the article

  • Enter comments on queries in TraceTune

    - by Bill Graziano
     I’m trying to make TraceTune (and eventually ClearTrace) work the way I do. My typical query tuning session goes like this:

     1. Run a trace and upload to TraceTune/ClearTrace
     2. Tune the slowest queries
     3. Goto 1

     I might do this two or three times in one day and then not come back to it again for weeks or even months. This is especially true for those clients that I only visit a few times per month. In many cases I’ll look at a query, decide I can’t do much with it and move on. I needed a way to capture that information. TraceTune now lets you enter a comment for a query. It can be as simple or as complex as you like. The comment will be shown inline with the execution history of that query. This should let you walk back through your history with a query and decide whether you should spend more time tuning it.

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
     In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC they left the storage rule named "default" unchanged, created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the "default" storage rule, restarted WCC, and checked in a new content item. To my surprise the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer "default", but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • iOS Variable vs Property

    - by William Smith
     Just started diving into Objective-C and iOS development, and I was wondering when, and in what location, I should be declaring variables/properties. The main piece of code I need explained is below. Why and when should I be declaring variables inside the @interface block, and why do they have the same variable with an underscore prefix and then the same one as a property? And then in the implementation they do @synthesize tableView = _tableView (I understand what synthesize does). Thanks :-)

     @interface ViewController : UIViewController <UITableViewDataSource, UITableViewDelegate> {
         UITableView *_tableView;
         UIActivityIndicatorView *_activityIndicatorView;
         NSArray *_movies;
     }

     @property (nonatomic, retain) UITableView *tableView;
     @property (nonatomic, retain) UIActivityIndicatorView *activityIndicatorView;
     @property (nonatomic, retain) NSArray *movies;
     @end

    Read the article

  • Linux,Apache,NetBeans,PHP == Windows,IIS/Cassini,Visual Studio,ASP.Net

    - by Neil Smith
     I've worked out how to get my Linux-based NetBeans PHP development machine to behave much like what happens when you create a new ASP.Net project in Visual Studio. Firstly, create multiple PHP projects in NetBeans, say for example mysite1 and mysite2. Next, edit the apache2/sites-enabled/000-default file and add two VirtualHost sections as below:

     <VirtualHost 127.0.1.1>
         ServerName mysite1.localhost
         DocumentRoot /var/www/mysite1/
     </VirtualHost>
     <VirtualHost 127.0.2.1>
         ServerName mysite2.localhost
         DocumentRoot /var/www/mysite2/
     </VirtualHost>

     For each site you add, pick a different IP address similar to the above, where I use the third octet to increment. Next, edit the /etc/hosts file and add the following two lines:

     127.0.1.1    mysite1.localhost
     127.0.2.1    mysite2.localhost

     Then in NetBeans, go to File->Project Properties, click on 'Run Configuration' and set 'Project Url' to http://mysite1.localhost for the first project and http://mysite2.localhost for the second project. That will give you a PHP development box which develops multiple PHP projects similar to how a Visual Studio Windows-based box handles multiple ASP.Net sites. Hope this helps someone :)

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • What constitutes proper use of threads in programming?

    - by Smith
     I am tired of hearing people recommend that you should use only one thread per process, while many programs use up to 100 per process! Take for example some common programs:

     - The VB.NET IDE uses about 25 threads when not debugging
     - System uses about 100
     - Chrome uses about 19
     - Avira uses more than 50

     Any time I post a thread-related question, I am reminded almost every time that I should not use more than one thread per process, and all the programs I mention above are running on my system with a single processor. What constitutes proper use of threads in programming? Please keep comments general, but I'd prefer the .NET Framework. Thanks.

     EDIT: changed "processor" to "process"

    Read the article

  • Writing selenium tests, should I just get it done or get it right?

    - by Peter Smith
     I'm attempting to drive my user interface (heavy on JavaScript) through Selenium. I've already tested the rest of my Ajax interaction with Selenium successfully. However, this one particular method seems to be eluding me because I can't seem to fake the correct click event. I could solve this problem by simply pausing the test, waiting for the user to click a point, and then continuing, but this seems like a cop-out. But I'm really running out of time on my deadline to have this done and working. Should I just get this done and move on, or should I spend the extra (unknown) amount of time to fix this problem and be able to have my Selenium tests 100% automated?
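     When a bare click() never reaches the right JavaScript handler, two common workarounds are worth sketching. This is a minimal example using the Selenium Python bindings; the URL and CSS locator are made-up placeholders, and the MouseEvent constructor assumes a reasonably modern browser:

        from selenium import webdriver
        from selenium.webdriver.common.action_chains import ActionChains
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        driver.get("http://localhost/myapp")  # placeholder URL
        element = driver.find_element(By.CSS_SELECTOR, "#map-canvas")  # hypothetical locator

        # Workaround 1: move the real mouse cursor onto the element first, so
        # mouseover/mousedown/mouseup/click fire in the order handlers expect.
        ActionChains(driver).move_to_element(element).click().perform()

        # Workaround 2: dispatch a synthetic MouseEvent with coordinates, for
        # handlers that inspect where inside the element the click landed.
        driver.execute_script("""
            var el = arguments[0], r = el.getBoundingClientRect();
            el.dispatchEvent(new MouseEvent('click', {
                bubbles: true,
                clientX: r.left + r.width / 2,
                clientY: r.top + r.height / 2
            }));
        """, element)

        driver.quit()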

    Read the article

  • How to handle this unfortunately non hypothetical situation with end-users?

    - by User Smith
     I work in a medium-sized company but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) that was wanted at the very end was not added into the application. So, this application has been running in live/production since the end of 2011, I might add without issue. Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first, before prioritized project work. I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously... I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain the requirements and daily updates regarding the project and specifically state "funcA not achieved due to time constraints" (proofC). I have spoken with many of them and I can see where they could be confused, as they are very far from a programming background, but I also know they are intelligent enough to act as a group in order to bypass project prioritization orders and get functionality that they want to make their job easier. The worst part is that now groupthink is setting in, and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as the state of the logic goes, it is very cut and dried, to the point of: if 1 = 1, funcA will not work. So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and will probably take over a month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions, as this is not the format for StackExchange. Please don't downvote me too terribly; it is pretty common on this specific Stack site. I am looking for honest answers to this situation and I couldn't find a more appropriate forum.

    Read the article

  • How do I fix dragging windows to the adjacent workspace?

    - by Bill O'Dwyer
     I installed CompizConfig-Settings-Manager and turned on all the settings I liked and had in 11.10, including the ability to drag my windows to the adjacent workspace. It's under the Desktop Wall section, on the Edge Flipping tab, and I've checked "Edge Flip Move" and "Edge Flip DnD." In 11.10, the movement was smooth between each workspace, and the window would still be "grabbed" in the same place. In 12.04, it's leaving the window behind and the mouse appears to be "grabbing" nothing, but I'm still holding onto the window, and I can still move and place it within the workspace (or indeed the previous workspace, as it won't appear in the desired place until I drag the mouse all the way to the edge of the screen). Any way to fix this? I'm running 12.04 beta 2.

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
     I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time.

     We’re probably all familiar with the concept of “Done Done”, which represents what it *actually* means for a feature to be done. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

     The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

     - Code Reviews
     - StyleCop
     - FxCop
     - User Manuals Updated
     - Architecture Docs Updated
     - Tested by QA
     - Tested by UAT
     - Installers Created
     - Support Knowledge Base Updated
     - Deployment Instructions (for Ops) Written
     - Automated Unit Tests Run
     - Automated Integration Tests Run

     Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoDs. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature.

     This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. Sure there’s other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

     Release
     - Create/update deployment instructions for Ops
     - Instructional videos updated
     - Run manual regression test suite
     - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)

     Sprint
     - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
     - User guides updated
     - Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo)

     User Story
     - Manual test scripts developed and run
     - Tested by BA
     - Deployed in shared QA environment using automated deployment process
     - Peer code review

     Code Check-In
     - Compiled (warning-free)
     - Passes StyleCop
     - Passes FxCop
     - Create installer packages
     - Run automated tests
     - Run automated integration tests

     PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • What are some good interview questions for a position that consists of reviewing code for security vulnerabilities?

    - by John Smith
     The position is an entry-level position that consists of reading C++ code and identifying lines that are vulnerable to buffer overflows, out-of-bounds reads, uncontrolled format strings, and a bunch of other CWEs. We don't expect the average candidate to be knowledgeable in the area of software security, nor do we expect him or her to be an expert computer programmer; we just expect them to be able to read the code and correctly identify vulnerabilities. I guess I could ask them the typical interview questions: reverse a string, print a list of prime numbers, etc., but I'm not sure that their ability to write code under pressure (or lack thereof) tells me anything about their ability to read code. Should I instead focus on testing their knowledge of C++? Ask them if they understand what a pointer is and how bitwise operators work? My only concern about asking that kind of question is that I might unfairly weed out people who don't happen to have the knowledge but have the ability to acquire it. After all, it's not like they will be writing a single line of code, and it's not like we are looking only for people who already know C++, since we are willing to train the right candidate. (It is true that I could ask those questions only of those candidates who claim to know C++, but I'd like to give the same "test" to everyone.) Should I just focus on trying to get an idea of their level of intelligence? In other words, should I get them to talk and pay attention to the way they articulate their thoughts, and so on?

    Read the article

  • Samba network sharing NTFS drives and root permissions from local drives

    - by Bill
     I'm able to share my internal secondary NTFS drives (sdb1, 2 and 3) on the network with Windows computers now, but even though Samba read/write is enabled, Windows network computers can only open files read-only and can't save files to the Samba-shared drives/folders. I try to set permissions in Ubuntu via folder and/or file properties, even logged in as root via Nautilus, but all the Samba-shared folders and files are set as owner = root, accessible, and it does not allow me to change them to read/write; they just reset to root, accessible. In other words, I can't change permissions. I'm running Ubuntu 11.04 Gnome on an old Dell Dimension 2400. Also, in order for me to copy or move any files from the Ubuntu drive to the sdb1, 2 or 3 drives, I have to gksu nautilus. This consequently prevents me from copying .ISO files to my "Multisys" thumb drive too.

    Read the article

  • Cheaper alternatives to 99Designs.com (outsource CSS design)

    - by Chris Smith
     I'm designing my own website as a side project and I want the site to look professional. (Read: not designed by a programmer.) I don't mind spending a little money to have a professional do it, but design sites like 99designs.com cost way too much (~$500+). Is there a cheaper (~$100 - $200) alternative for getting a designer to improve an existing site? (Things like updating the CSS or suggesting better ways of laying out the navigation.) Or is my best bet trying to pick up a freelancer on Craigslist?

    Read the article

  • PASS Budget Posted

    - by Bill Graziano
    If you’re a member of PASS you can view our FY2011 budget at http://www.sqlpass.org/AboutPASS/Governance.aspx.  Our detailed budget is 29 pages long and provides an incredibly detailed snapshot of where our money comes from and how we spend it.  I’ve also written a summary highlighting some of the changes from last year.  If you have any questions about the budget you can ask them here or on the PASS site.

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
     The PASS Board recently decided that we wouldn’t put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member.

     The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We’ve completed the two events and now it’s time to step back and review the program. This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been “Because that’s the way we did it last year”. And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we’re doing makes sense for the organization and our members. I have no doubt that this was a great event for the attendees. We just need to validate it’s the best use of our resources. Please keep in mind that we haven’t cancelled the event. We’ve just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I’ve come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas and visiting other similar events. I should point out that their views aren’t unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing.

     I think it’s helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people, which means they are nearly the same size as our recent SQLRally. Our goals included:

     - Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that’s still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQLSaturday doesn’t?
     - Speaker growth. One of the stated goals was to build a “farm club” for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren’t providing? Pre-Conference speakers is one obvious answer here.
     - Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point.
     - International Event Model. (I need to write more about this but I’m out of time. I’ll cover it in the next installment.)

     There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. That’s more difficult to do at a SQL Saturday.

     We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that’s reasonable but on an absolute basis it’s not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don’t have detailed financials back yet but that’s our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won’t have the full picture around the financial impact. This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don’t underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs.

     Last I wanted to talk about some of the things I’m thinking about. I don’t think it’s as simple as saying if we just fix “X” it all gets better.

     - Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn’t as viable. With some of those events over 500 we need to reconsider this.
     - We need to get back to a hotel venue. That will help with cost and networking.
     - Is this the best use of the 1,000 HQ hours that we invested in the event?
     - Is our price-point correct? I’m leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure.
     - Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS or is it something that chapters, etc. drive people to?
     - Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. This gives you the two day aspect as well as offering options for paid and free attendees. I’m very intrigued by this.
     - Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally focused on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday?

     These are all questions that I think should be asked and answered before we do this event again. And we can’t do that if we don’t take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I’d like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • Framework Folders and Duplicate File Names

    - by Kevin Smith
     I have been working with Framework Folders a little bit in the past few days and found one unexpected behavior that is different from Contribution Folders (Folders_g). If you check a file into a Framework Folder and a file with that name already exists in the folder, it will allow it and rename the file for you. In Folders_g this would have generated an error and prevented you from checking in the file. A quick check of the Framework Folders configuration settings in the Application Administrator's Guide for Content Server does not show a configuration parameter to control this. I'm still thinking about this and not sure if I like this new behavior or not. I guess from a user perspective this more closely aligns Framework Folders to how Windows handles duplicate file names, but if you are migrating from Folders_g and expect a duplicate file name to be rejected, this might cause you some problems.

    Read the article

  • What should I wear to a job interview with a game development company?

    - by Bill
    Many game development companies are less formal in terms of workplace attire than other types of software development houses. For example, I know that one place at which I will be interviewing soon has a predominant workplace culture of jeans and polos or t-shirts. Should I wear a suit? Shirt and tie? Shirt and sport jacket, with or without tie? I want to show that I'm serious about the job, but that I understand the culture, too.

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
     I regularly visit client sites and review their SQL Server configurations. I come across all kinds of strange settings. I’ve been thinking about a way to aggregate people’s configurations and see what’s common and what’s unique. I used to do that with polls on SQLTeam.com. I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I’ve been working on an application for another project that is similar. It will be fairly easy to use that code for this. I can have something up and running in a few days – if people are interested in it. I admit that I often come up with ideas that just don’t make sense. This may be one of them. One of your biggest concerns has to be how secure your data is. My solution is not to store anything identifying. The instance name and database names can both be “anonymized” and I don’t store the machine name or IP address or anything to do with logins.

     Some of the questions I’m curious about are:

     - At what database size does the Enterprise Edition become prevalent?
     - Given the total size of the databases, how much RAM is common?
     - How many people have multiple data files? At what size does that become prevalent?
     - How common is database mirroring? Replication? Log shipping?
     - How common is full recovery mode? At what data size does it become prevalent?

     I think those are all questions that are easy to answer -- with the right data. The big question is whether or not people will share their SQL Server configurations. I understand that organizations in regulated or high-security environments can’t participate. But I think that leaves many, many people that can. Are you willing to share your configuration and learn about others? I have a simple sign-up form here. It’s actually a mailing list signup that also captures your edition, number of servers and largest database. The list will only be used for this project. Is your SQL Server configured correctly? Do you wonder what the next step is as your data grows? Take a second and sign up.

    Read the article

  • today's multi-device world for web development

    - by paul smith
     With the huge explosion of mobile devices and the addition of HTML5/CSS3, there seems to be a shift towards "responsive" designs (i.e., adapting to smaller screen sizes), typically achieved using CSS3 media queries. My question is, given the current need to adapt to both desktop and mobile, is it common practice to actually organize two versions of your website (one for desktop and one for mobile)? Or is there just one version with different CSS files targeting different devices and screens? Handling just cross-browser (IE6, FF3, Opera 9, etc...) HTML4/5 and CSS2/3 was already hard enough, but now we're expected to handle cross-device (phone, tablet, etc...) as well, so my assumption is companies would create a separate project for mobile and redirect based on the user agent, but this is just a guess.

    Read the article

  • Tracking Redirects Leading to your site

    - by Bill
     Is there a way in which I can find out if a user arrived at my site via a redirect? Here's an example: there are two sites, first.com & second.com. Any request to first.com will do a 302 redirect to second.com. When the request at second.com arrives, is there any way to know it was redirected from first.com? Note that in this example you have no control over first.com. (In fact, it could be something bad, like kiddieporn.com.) Also note, because it is a redirect, it will not be in the HTTP referrer header.
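     One way to see the mechanics for yourself is to run a throwaway redirecting server and inspect what the browser actually sends to the destination. A minimal sketch (a hypothetical standalone demo standing in for first.com, not anything from the question):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Redirector(BaseHTTPRequestHandler):
            def do_GET(self):
                # Answer every request with a 302 pointing at the second site.
                self.send_response(302)
                self.send_header("Location", "http://localhost:9000/landing")
                self.end_headers()

        # Browse to http://localhost:8000/ and watch the follow-up request in
        # the browser's dev tools: the redirect itself adds no header naming
        # the redirecting host, which is why second.com can't see first.com.
        HTTPServer(("localhost", 8000), Redirector).serve_forever()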

    Read the article

  • Blogger Template: How is inline style tag getting attached to img? [migrated]

    - by john Smith
     Examining a Blogger template's img tag (data:post.thumbnailUrl), I've come upon a mystery. An inline style attribute controlling the width, margin and height parameters is getting added to my img element. It is auto-adjusting the image's ratio to fit a smaller size. But I can't figure out where this style-setting script lives and how it's happening in my template. My template has no special JavaScript or jQuery scripts. The full-size images on the single-post page don't have this style attribute. Is this a CSS or XML feature?

     element.style {
         margin-top: 0px;
         width: 301.0033444816054px;
         height: 200px;
         margin-left: -0.5016722408026908px;
     }

    Read the article
