Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext to the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. With the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories plus a few others to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request, and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view, it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
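
    One commonly suggested way out (not from the original post) is to collapse the repository parameters into a single injected factory, so the controller declares one dependency and asks it for repositories as needed. A minimal sketch, with IRepositoryProvider and RepositoryProvider as invented names:

        // Sketch only: IRepositoryProvider / RepositoryProvider are invented names.
        // One dependency hands out repositories, all sharing the per-request DbContext.
        public interface IRepositoryProvider
        {
            IRepository<T> GetRepository<T>() where T : class;
        }

        public class RepositoryProvider : IRepositoryProvider
        {
            private readonly DbContext _context;

            public RepositoryProvider(DbContext context)
            {
                _context = context;
            }

            public IRepository<T> GetRepository<T>() where T : class
            {
                // Reuses the same DbContext, so the one-context-per-request
                // behavior of the DI container is preserved.
                return new GenericRepository<T>(_context);
            }
        }

        // The controller shrinks to a handful of parameters:
        public HomeController(IRepositoryProvider repositories,
                              ILogError logError, ICurrentUser currentUser)
        {
            var entityOnes = repositories.GetRepository<EntityOne>();
            // ...and so on for the other entities, as needed per action.
        }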

    Read the article

  • "System.Data.OracleClient requires Oracle client software version 8.1.7 or greater." Error Message

    - by Jandost Khoso
    Quick resolution: give full permission to AUTHENTICATED USERS on the following folders:

    a) ORACLE_HOME
    b) Program Files\ORACLE

    Also check your PATH. You might have installed different clients on your system, and your .NET application may be pointing to a home with an inappropriate client. What your .NET application should load is an OCI.DLL with a file version of 8.1.7 or greater.

    According to the MSDN document Oracle and ADO.NET: "The .NET Framework Data Provider for Oracle provides access to an Oracle database using the Oracle Call Interface (OCI) as provided by Oracle Client software. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for SQL Server, OLE DB, and ODBC."

    The MSDN document System Requirements (Oracle) says: "The .NET Framework Data Provider for Oracle requires Microsoft Data Access Components (MDAC) version 2.6 or later. MDAC 2.8 SP1 is recommended. You must also have Oracle 8i Release 3 (8.1.7) Client or later installed."

    Both the .NET Framework Data Provider for Oracle and the Oracle Data Provider for .NET are data providers for accessing an Oracle database. The former ships with the .NET Framework and requires Oracle client version 8.1.7 or above. The latter is provided by Oracle and requires Oracle client version 9.2 or later.

    The Oracle Data Provider for .NET (ODP.NET) features optimized ADO.NET data access to the Oracle database. ODP.NET allows developers to take advantage of advanced Oracle database functionality, including Real Application Clusters, XML DB, and advanced security.

    See the document Comparing the Microsoft .NET Framework 1.1 Data Provider for Oracle and the Oracle Data Provider for .NET for more information about the differences.
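
    As a diagnostic aid (not from the article), a small console program can list every oci.dll that the PATH exposes along with its file version; whichever appears first is the one System.Data.OracleClient will load. A minimal sketch:

        // Minimal sketch: list every oci.dll reachable via PATH with its file version.
        using System;
        using System.Diagnostics;
        using System.IO;

        class OciVersionCheck
        {
            static void Main()
            {
                string path = Environment.GetEnvironmentVariable("PATH") ?? "";
                foreach (string dir in path.Split(';'))
                {
                    try
                    {
                        string candidate = Path.Combine(dir.Trim(), "oci.dll");
                        if (File.Exists(candidate))
                        {
                            FileVersionInfo info = FileVersionInfo.GetVersionInfo(candidate);
                            // Anything below 8.1.7 here explains the error message.
                            Console.WriteLine("{0} -> {1}", candidate, info.FileVersion);
                        }
                    }
                    catch (ArgumentException)
                    {
                        // Skip malformed PATH entries (quotes, invalid characters).
                    }
                }
            }
        }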

    Read the article

  • Authorization error when testing FTP to UNC

    - by user64204
    We have a Windows Server 2008 R2 machine with Active Directory (hereafter called DC) running as a domain controller, on which we have IIS and an FTP site installed. We have a second Server 2008 machine (hereafter called SHARE) which is joined to that domain and has a disk shared as a network share (\\share\Office). That network share is used as the FTP site's physical path on DC. We've tested the FTP from the IIS FTP configuration panel by clicking on Basic Settings... then Test Settings.... When setting Administrator as the username with the Connect as... option, everything is fine. When no user is provided, we get the error below (screenshot omitted). Q1: Could someone explain in more understandable terms what is written in the Details text area?

    Read the article

  • Apache still running after uninstalling

    - by Ruslan Osipov
    I am trying to uninstall Apache so I can install nginx, but it doesn't seem to work.

        $ ps aux | grep httpd
        root     22348  0.0  0.2 167252 8864 ?     Ss  14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22353  0.0  0.1 167624 6088 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22354  0.0  0.1 167252 5292 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22355  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22356  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22357  0.0  0.1 167252 5052 ?     S   14:33  0:00 /usr/sbin/httpd -k start -DSSL
        apache   22797  0.0  0.1 167252 5052 ?     S   14:38  0:00 /usr/sbin/httpd -k start -DSSL
        1003     22883  0.0  0.0   9388  884 pts/1 S+  14:46  0:00 grep httpd

        $ which apache2
        $ dpkg -S apache
        bash-completion: /etc/bash_completion.d/apache2ctl
        apparmor: /etc/apparmor.d/abstractions/apache2-common
        $ dpkg -S `which httpd`
        dpkg-query: no path found matching pattern /usr/sbin/httpd.

    The package seems to be uninstalled, but the processes are still running, and /usr/sbin/httpd is still there. Any hints?
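
    A sketch of the usual cleanup, assuming the leftovers are exactly the httpd workers shown above (verify dpkg really no longer owns the file before removing it by hand):

        # Kill the orphaned Apache processes the uninstall left behind.
        sudo pkill -f /usr/sbin/httpd

        # Confirm nothing still holds port 80 before starting nginx.
        ps aux | grep '[h]ttpd'
        sudo netstat -tlnp | grep ':80'

        # If the binary itself survived and no package claims it, remove it.
        sudo rm /usr/sbin/httpd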

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or to validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference (e.g. to a standard) in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Tracking contributions from contributors not using git

    - by alex.jordan
    I have a central git repo located on a server. I have many contributors who are not tech savvy, do not have server access, and do not know anything about git, but they are able to contribute via the project's web site. Each of them logs on via a web browser and contributes to the project. I have set things up so that when they log on, each user's contributions are made into a cloned repo on the server that is specifically for that user. Periodically, I log on to the server, visit each of their repos, and do a git diff to make sure they haven't done anything bad. If all is well, I commit their changes and push them to the central repo. Of course I need to manually look at their changes so that I can add an appropriate commit message. But I would also like to track who made the changes. I am making the commits, and I (and the web server) are the only users actually writing anything to the server. I could track this in the commit messages. While this strikes me as wrong, if it is my only option, is there a way to make userx's cloned repo always include "userx: " before each commit message that I add, so that I do not have to remind myself which user's repo I am in? Or even better, is there an easy way for me to make the commit, but in such a way as to credit the user whose cloned repo I am in?
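
    For the "even better" option: git records the author and the committer separately, so you can commit as yourself while crediting the contributor. A minimal sketch (names, paths and addresses are placeholders):

        # In userx's cloned repo: you commit, but userx is recorded as author.
        cd /srv/repos/userx        # placeholder path
        git add -A
        git commit --author="User X <userx@example.com>" -m "Describe the change"

        # The log then shows both identities:
        git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'

        # Or set the identity once per cloned repo, so plain `git commit`
        # credits that user automatically:
        git config user.name  "User X"
        git config user.email "userx@example.com"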

    Read the article

  • Is a disk/ata timeout exception dangerous?

    - by j-g-faustus
    I have a few hard drives in mdadm RAID 5, configured to go to standby after a few minutes of inactivity (using hdparm.conf spindown_time). At irregular intervals I get messages like these in dmesg:

        [  1840.251661] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [  1840.251722] ata4.00: failed command: SMART
        [  1840.251758] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
        [  1840.251759]          res 40/00:14:50:2e:04/00:00:02:00:00/40 Emask 0x4 (timeout)
        [  1840.251858] ata4.00: status: { DRDY }
        [  1840.251888] ata4: hard resetting link
        [  1840.600742] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [  1840.601521] ata4.00: configured for UDMA/133
        [  1840.601547] ata4: EH complete
        [337877.713988] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [337877.714019] ata4.00: failed command: SMART
        [337877.714038] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
        [337877.714039]          res 40/00:04:90:10:81/00:00:00:00:00/40 Emask 0x4 (timeout)
        [337877.714089] ata4.00: status: { DRDY }
        [337877.714107] ata4: hard resetting link
        [337878.063085] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [337878.063743] ata4.00: configured for UDMA/133
        [337878.063764] ata4: EH complete

    I think the exception is caused by smartd polling a drive that does not wake up quickly enough. There are no issues (that I can tell) in accessing the drives normally through the file system - it takes a few seconds longer than normal when they are asleep, but there are no exceptions. Is this something I should worry about, as a potential symptom of something that could corrupt a drive over time? Or can I safely ignore it as part of normal operation? Edit: By request: smartctl -a for sda and sde; both disks are members of the array. If ata4 is the same as scsi-4, then sde is the one that gave the error above, according to /dev/disk/by-path.
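
    If the timeouts do line up with smartd's polling, one commonly suggested mitigation is to tell smartd not to touch drives that are in standby. A sketch of the smartd.conf directive (the device name is an example, and the smartd-is-the-cause theory is an assumption, not a confirmed diagnosis):

        # /etc/smartd.conf - skip SMART checks while the drive is spun down,
        # so the poll does not race the disk's wake-up. Device is an example;
        # list each array member, or use DEVICESCAN.
        /dev/sde -a -n standby,q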

    Read the article

  • Museum of Modern Art Starts Video Game Collection; Acquires Myst, Pac-Man, and More

    - by Jason Fitzpatrick
    The Museum of Modern Art is weighing in on the video-games-as-art debate by starting a collection of iconic video games and putting them up for public display. Read on to see which games are included in the initial batch and the MoMA's reasons for starting a video game collection. Although the collection is slated to grow to over 40 titles, the seed batch is 14 titles, including Pac-Man, Tetris, SimCity 2000, Myst, Portal, and Dwarf Fortress. In the announcement they explain the motivation for building a video game collection:

        Are video games art? They sure are, but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design—a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity. Our criteria, therefore, emphasize not only the visual quality and aesthetic experience of each game, but also the many other aspects—from the elegance of the code to the design of the player's behavior—that pertain to interaction design. In order to develop an even stronger curatorial stance, over the past year and a half we have sought the advice of scholars, digital conservation and legal experts, historians, and critics, all of whom helped us refine not only the criteria and the wish list, but also the issues of acquisition, display, and conservation of digital artifacts that are made even more complex by the games' interactive nature. This acquisition allows the Museum to study, preserve, and exhibit video games as part of its Architecture and Design collection.

    The above quote is only a small snippet of a much lengthier look at the benefits of examining and preserving video games; hit up the link below to check out the full post, including future titles the MoMA would like to add to its archive. Video Games: 14 in the Collection, for Starters [Inside/Out]

    Read the article

  • How do you check Driver Verifier logs on Windows 7 after catching a faulty driver?

    - by Wolf
    I kept getting BSODs on a clean install of Windows 7 (plus updates), so I decided to run Driver Verifier. I had to select all drivers, since it didn't catch the culprit when I didn't include Microsoft drivers. I know it is not a hardware problem, since everything works fine on Linux and memtest86+ reports no errors in the RAM (8 GB). This time Verifier caught the faulty driver and gave me a BSOD telling me so. Using WhoCrashed, I could verify the last error message with its parameters and source. Yet the source is always the kernel (ntoskrnl.exe), and the bugcheck this time was 0xC4 (0x85, 0xFFFFF9804429AFC0, 0x2, 0x11B948). After searching the web, I found that this means "the driver called MmMapLockedPages without having locked down the MDL pages." As I am not developing any driver, this is of no use to me. However, I would like to know which driver caused Driver Verifier to trigger the alert, so I can either disable it or roll back to a previous version in order to stop the crashes.
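
    For reference, the faulting module can usually be pulled out of the crash dump itself with the Debugging Tools for Windows; a minimal sketch of the session (the dump path is the Windows default location, confirm yours):

        C:\> windbg -z C:\Windows\MEMORY.DMP

        kd> !analyze -v
        (the IMAGE_NAME / MODULE_NAME fields name the module that tripped Verifier)

        kd> lmvm <module_name>
        (shows the driver file's path and timestamp, useful before disabling it or rolling it back)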

    Read the article

  • Need help making a decision on a career switch? [closed]

    - by Fero
    I am a software engineer with 4 years of experience in web development using PHP, Drupal, MySQL, Ajax, and client-side technologies like JavaScript, jQuery, HTML and more. I have narrowed my career switch down to two platforms:

      • SAP ABAP (because ABAP is related to coding)
      • Salesforce

    My one and only reason is that I am not getting a good pay package for the technologies I am working with (and I am not expecting more than the market rate). Even top-level companies are not ready to pay well for these technologies. To be honest, I am good at technical and HR interviews too. So I made an analysis of highly paid platforms and arrived at these two: SAP and Salesforce (the probability of an on-site opportunity is also very high with both). Here are my questions: I am totally new to the above-mentioned technologies; which would suit me best? I have basic ideas of both platforms, but I am confused about which to choose. I have good coding experience in PHP and Drupal as well as good experience with MySQL, and very good experience creating sites for e-commerce, LMS, Q&A, travel, blogs, social networking and more. Which can I learn easily, or for which can I get good documentation online? Kindly understand that I am not trying to start a debate here. I hope the professionals here can show me the correct path - I am waiting to travel on it.

    Read the article

  • Partner Showcase -- GreyHeller

    - by PeopleTools Strategy
    This is the next in a series of posts spotlighting some of our creative partners. GreyHeller is a PeopleSoft-focused software company founded by PeopleTools alumni Larry Grey and Chris Heller. GreyHeller's products focus on addressing the technology needs of PeopleSoft customers in the areas of mobile enablement, reporting/business intelligence, security, and change management. The company helps customers protect and extend their investment in PeopleSoft.

    GreyHeller's products and services are in use by nearly 100 PeopleSoft customers on 6 continents. Their product solutions are lightweight bolt-ons--extensions to a customer's PeopleSoft environment requiring no new infrastructure. This makes for rapid implementations.

    A major area of interest for PeopleSoft customers these days is mobile enablement. GreyHeller's current mobile implementations include the following customers:

      • Texas Christian University (Live: TCU student newspaper article here)
      • Coppin State University (Live)
      • University of Cambridge (June go-live)
      • HealthSouth (June go-live)
      • Frostburg State University (Q3 go-live)
      • Amedisys (Q3 go-live)

    GreyHeller maintains a PeopleTools-focused blog that provides tips, techniques, and code snippets aimed at helping PeopleSoft customers make the most of their PeopleSoft system. In addition to the blog, the GreyHeller team conducts and records weekly webinars that demonstrate the latest PeopleTools features, tips, and techniques. Recordings of these webinars can be accessed here. Visit GreyHeller's web site for more information on the company and its work.

    Read the article

  • Configuring Nginx for Wordpress and Rails

    - by Michael Buckbee
    I'm trying to set up a single website (domain) that contains both a front-end Wordpress installation and a single-directory Ruby on Rails application. I can get either one to work successfully on its own, but I can't sort out a configuration that lets them coexist. The following is my best attempt, but it results in all Rails requests being picked up by the try_files block and redirected to "/".

        server {
            listen 80;
            server_name www.flickscanapp.com;

            root /var/www/flickscansite;
            index index.php;

            try_files $uri $uri/ /index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            passenger_enabled on;
            passenger_base_uri /rails;
        }

    An example request to the Rails app would be http://www.flickscan.com/rails/movies/upc/025192395925
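
    One configuration that is often suggested for this kind of split is to move try_files into a location / block and scope the Passenger directives to /rails, so Rails requests never reach the Wordpress fallback. A sketch only, untested against this exact site:

        # Sketch: Wordpress at the root, Passenger under /rails.
        server {
            listen 80;
            server_name www.flickscanapp.com;

            root /var/www/flickscansite;
            index index.php;

            # Wordpress fallback applies only outside /rails.
            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            # Passenger handles everything under /rails, bypassing try_files above.
            location /rails {
                passenger_enabled on;
                passenger_base_uri /rails;
            }
        }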

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I see a report in a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution.

    I started with a List control, which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements that need to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset.

    The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group the report by. A good example in this case would be the employee name or ID.

    Create a second report, which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document, by parameterizing the dataset.

    Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property.

    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience: you must make sure that the names match exactly. Tie the name of each parameter to either a field in the dataset or a parameter of the parent report.

    Read the article

  • Creating a Sandboxed Instance

    - by Ricardo Peres
    In .NET 4.0 the policy APIs have changed a bit. Here's how you can create a sandboxed instance of a type, which must inherit from MarshalByRefObject:

        static T CreateRestrictedType<T>(SecurityZone zone, params Assembly[] fullTrustAssemblies) where T : MarshalByRefObject, new()
        {
            return (CreateRestrictedType<T>(zone, fullTrustAssemblies, new IPermission[0]));
        }

        static T CreateRestrictedType<T>(SecurityZone zone, params IPermission[] additionalPermissions) where T : MarshalByRefObject, new()
        {
            return (CreateRestrictedType<T>(zone, new Assembly[0], additionalPermissions));
        }

        static T CreateRestrictedType<T>(SecurityZone zone, Assembly[] fullTrustAssemblies, IPermission[] additionalPermissions) where T : MarshalByRefObject, new()
        {
            // Build evidence for the requested zone and derive the standard sandbox permissions.
            Evidence evidence = new Evidence();
            evidence.AddHostEvidence(new Zone(zone));

            PermissionSet evidencePermissionSet = SecurityManager.GetStandardSandbox(evidence);

            foreach (IPermission permission in additionalPermissions ?? new IPermission[0])
            {
                evidencePermissionSet.AddPermission(permission);
            }

            // Assemblies passed in here run with full trust inside the sandbox.
            StrongName[] strongNames = (fullTrustAssemblies ?? new Assembly[0]).Select(a => a.Evidence.GetHostEvidence<StrongName>()).ToArray();

            AppDomainSetup adSetup = new AppDomainSetup();
            adSetup.ApplicationBase = Path.GetDirectoryName(typeof(T).Assembly.Location);

            AppDomain newDomain = AppDomain.CreateDomain("Sandbox", evidence, adSetup, evidencePermissionSet, strongNames);

            ObjectHandle handle = Activator.CreateInstanceFrom(newDomain, typeof(T).Assembly.ManifestModule.FullyQualifiedName, typeof(T).FullName);

            return (handle.Unwrap() as T);
        }
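
    A hypothetical usage sketch (MyPlugin is an invented type; as noted above, whatever you pass in must derive from MarshalByRefObject):

        // Hypothetical caller: run a type with Internet-zone permissions.
        public class MyPlugin : MarshalByRefObject
        {
            public string Run() { return "running sandboxed"; }
        }

        // The instance lives in the "Sandbox" AppDomain; calls are remoted to it.
        MyPlugin plugin = CreateRestrictedType<MyPlugin>(SecurityZone.Internet);
        Console.WriteLine(plugin.Run());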

    Read the article

  • Set security on a pattern of subfolders (Server 2003)

    - by Mark Major
    I have a folder structure similar to the one shown below these paragraphs. How do I change security on every 'Photos' folder without clicking through each one individually in Windows Explorer? There are about 50 top-level folders (Bob, Jim, Eva, etc.) which have the same layout of folders inside. I am keen for any suggestions, either scripting or GUI. I am on Windows Server 2003. A cheap/free method would be good, as the company is part of a registered charity. Ideally I would like to do this via the DFS path, e.g. \\mycompany.local\Shared\Staff\Bob\. Thanks for reading. Thanks for any info. Mark

        Bob
            Review
            Profile
            Photos
        Jim
            Review
            Profile
            Photos
        Eva
            Review
            Profile
            Photos
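
    For the scripting route, a one-line loop with cacls (which ships with Server 2003) can apply the change to every Photos folder; a sketch only, where the local path, domain and group name are examples:

        rem Grant Change rights on every Staff\<name>\Photos folder and its contents.
        rem Path, domain and group are examples; double the % signs inside a .cmd file.
        for /d %d in ("D:\Shared\Staff\*") do cacls "%d\Photos" /T /E /G "MYCOMPANY\Photo Users":C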

    Read the article

  • How to read default key value with dconf or gsettings?

    - by Zta
    I would like to know the default value of a dconf/gsettings key. My question is a follow-up to the question below: "Where can I get a list of SCHEMA / PATH / KEY to use with gsettings?" What I'm trying to do is create a script that reads all my personal preferences so I can back them up and restore them. I plan to iterate through all keys, like the script above, see which keys have been changed from their default value, and make a note of these so they can be restored later. I see that dconf-editor displays a key's default value, but I'd very much like to script this. Also, I don't see how parsing the schemas in /usr/share/glib-2.0/schemas/ can be automated. Maybe someone can help? gsettings get-default|list-defaults would be nice =) (Geesh, it was much easier in the old days when you just kept your ~/.somethingrc in subversion ... =\ ) Based on the answer given below, I've updated the script to print the schema, key, the key's data type, default value, and actual value:

        #!/bin/bash
        for schema in $(gsettings list-schemas | sort); do
            for key in $(gsettings list-keys $schema | sort); do
                type="$(gsettings range $schema $key | tr "\n" " ")"
                default="$(XDG_CONFIG_HOME=/tmp/ gsettings get $schema $key | tr "\n" " ")"
                value="$(gsettings get $schema $key | tr "\n" " ")"
                echo "$schema :: $key :: $type :: $default :: $value"
            done
        done

    This workaround basically covers what I need. I'll continue working on the backup script from here.

    Read the article

  • Good design for a simple site that contains a blog

    - by bporter
    What is a good design for a simple web site with mostly static pages and a blog? I am helping a friend build this for their small business. We are looking for a simple approach that can be implemented fairly quickly. (I am a programmer and can help with coding, hosting, etc.) One option is to use a site like virb, which lets you choose from one of their themes and build a site pretty easily. You can also include a blog. They host the site for a pretty low monthly rate. I recommended this option, but my friend wants a design that is unique and custom. So, I took one of the themes and started modifying the HTML and CSS. This might still be a good option, but... ...If we are going to greatly modify it, why not just create the static pages from scratch and use something like Wordpress for the blog. Is this a good option? It looks fairly easy to integrate Wordpress with a site so that the design and behavior are really cohesive. Is this a good idea? Do you recommend any other approaches?

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. not 'common' content like a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, but it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are: Is creating an account purely for AdSense to log in to the site with worth it in this case, seeing as it would be seeing its own 'personalized' version and not any other user's? More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.) How would you go about this problem?

    Read the article

  • Laptop abruptly powers off after few seconds of booting

    - by Alan Mendelevich
    I have a 3-year-old HP Pavilion dv2208 laptop. Recently it started abruptly powering off about 20-30 seconds into the Windows boot sequence after almost every reboot/shutdown. Even if I leave it at the Repair/Start Windows Normally screen, it powers off anyway. The only way I have managed to work around this is to enter the BIOS setup screen and leave it on for no less than 10 minutes. I don't know what happens there, but this helps every time. Any ideas on possible ways to fix this that don't include replacing the motherboard are highly appreciated. P.S.: I've tried resetting the BIOS to defaults, updating to the latest BIOS version, etc. It happens with both Vista and Windows 7.

    Read the article

  • PHP profiling on a production server, or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider: the sites I am working on are already built, and getting a testing server set up locally is just a huge pain; I would have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days getting a site fully working on a testing platform just so I can debug one page probably isn't an option. I can write tons of PHP, but I have no clue how to interact with or manage servers. Every tutorial I have read about setting up Xdebug or XHProf seems to involve installing something on a production server that I don't have access to, or have no clue how to work with. So are there any solutions out there that will show me where my PHP is slow without all sorts of server work that I just don't know how to do? XHProf seems the closest to usable for me, but from what I can tell it still has to be installed on a server. If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal, but I have never worked with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where. Plus, I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot: I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem practical at the moment.
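
    For reference, once the XHProf extension is available on the server, driving it from PHP takes only a few lines, so a single slow page can be profiled in place. A minimal sketch (the output path and the function being profiled are placeholders):

        <?php
        // Minimal XHProf sketch: wrap the slow section and dump the raw run data.
        // Requires the xhprof PECL extension; output path is an example.
        xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

        // run_the_slow_page_section();   // placeholder for the code being profiled

        $data = xhprof_disable();
        file_put_contents('/tmp/xhprof.' . uniqid() . '.myapp.xhprof', serialize($data));
        // The bundled xhprof_html viewer can then render the saved run.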

    Read the article

  • What should happen at the start of a software project?

    - by Willem
    A quick introduction: my college semesters include an 8-week project working for an actual company with a software need, in order to get some much-needed practical experience. I have just started such a project with 5 other students. We're required to spend roughly 40 hours a week per student on this project. We're working with SCRUM as the software development method; this was assigned by our teachers.

    The question: day one of the project just ended, and it has created some questions for me about how to start a project in the 'real world'. Our first day included working on a project planning document (not sure what the English term is), making an appointment with the company for an introduction and the opportunity to start specifying the requirements, and setting up some standards for behavior within the group. However, these items didn't take that long to finish. We've made some concrete plans for tomorrow, and the day after we'll meet the company. This still leaves several hours of 'work time' unspent. Is it usual not to be able to fill every hour of a day with work at the start of a project, or are we simply too inexperienced to see what work needs to be done at this stage, or are we, perhaps, going through the above list too fast? How does this work in the 'real world'? Do you spend your time wondering 'what should I do now', or do you have a clear view of what you're supposed to do at each moment?

    Read the article

  • Allow WRITE access to local folders on machines in SBS 2003 AD

    - by Dan M.
    I have an SBS 2003 client with a mess of a domain that is in the process of being cleaned up. But, for the life of me, I cannot find a setting that will allow write access to the local hard disk for domain users with redirected profiles (redirected to the server). This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried. Every time I unchecked the RO property, it would reset sometime right after I hit OK. Thanks in advance! Dan
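
    If no policy setting turns up, the ACL can be granted directly on the local folder from a command prompt; a hedged sketch using cacls (the path is the Server 2003 default profile location and is only an example, untested here):

        rem Grant Domain Users Change rights on the local Application Data folder.
        rem Path is the Server 2003 default profile location; adjust per machine.
        cacls "C:\Documents and Settings\%USERNAME%\Application Data" /T /E /G "Domain Users":C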

    Read the article

  • Disable the 250-character URL limit in Internet Explorer

    - by Keltari
    Users of a SharePoint document library are getting this error: "The URL for this file is too long for the application. A temporary copy of this file will be opened on your computer. You must save this copy as a new file." After doing some research, it appears Internet Explorer has a limit of roughly 250 characters for a URL in this scenario. Some URLs provided by SharePoint far exceed this limit; one example is 790 characters long. Is there a way to disable this limit? I have looked, but there doesn't appear to be a solution other than shortening the folder/path names.

    Read the article

  • How to associate all file types within Wine with their corresponding native applications?

    - by MestreLion
    This is easily done for a single file type, as answered in How to associate a file type within Wine with a native application?, by creating a .reg for the desired filetype. But this is for AVI only. I use some wine apps (uTorrent, Soulseek, Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... its impossible (and not desirable) to track down all possible file types that one may receive in an email or download in a torrent. So I neeed a solution to be more generic and broad. I need the file association to honor whatever native app is currently configured. And I want this to be done for all file types configured in my system. I've already figured out how to make the solution generic. Simply replacing the launched app in .reg for winebrowser, like this: [HKEY_CLASSES_ROOT\.pdf] @="PDFfile" "Content Type"="application/pdf" [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command] @="C:\\windows\\system32\\winebrowser.exe \"%1\"" Ive tested this and it works correctly. Since winebrowser uses xdg-open as a backend, and converts my windows path to a Unix one, the correct (Linux) app is launched. So I need a "batch" updater to wine's registry, sort of a wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can: List all Mime Types types in my system that have a default, installed app associated Extract all the needed info (glob, mime type, etc) Generate the .REG file in the above format The tricky part is: i've searched a LOT to find info about how association is done in Ubuntu 10.10 onwards, and documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even Gnome docs are obsolete. So far I've gathered 4 files that contain association info, but im clueless on which (or why) to use, or how to use them to generate the .reg file: ~/.local/share/applications/mimeapps.list ~/.local/share/applications/miminfo.cache /usr/share/applications/miminfo.cache /etc/gnome/defaults.list Any help, script or explanation would be greatly appreciated! Thanks!

    Read the article

  • How to configure sudoers to always keep the LD_LIBRARY_PATH environment variable?

    - by Yanick Girouard
    No matter what I try, it seems that the LD_LIBRARY_PATH environment variable is not kept after I run a command with sudo. The only way I have managed to make it stick is to prefix my sudo command with LD_LIBRARY_PATH=/the/path whenever I call it from the command line, but I would like not to have to do this every time. It seems the env_keep option ignores this variable, and so does the exempt_group option. My %group currently has ALL=(ALL) NOPASSWD:ALL as its access in sudoers. I would like this specific environment variable to be kept for any command I run. How can I do this? My server is running Red Hat Enterprise Linux 5.7.
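
    For reference, the sudoers directive would normally look like the sketch below. Note, though, that many sudo builds strip LD_* variables at compile time for security, which would explain env_keep appearing to ignore it; in that case the prefix trick (or a small wrapper script) is the reliable fallback. /the/path is an example:

        # /etc/sudoers (edit with visudo): ask sudo to preserve the variable.
        # Many distributions hard-strip LD_* regardless, so test after adding.
        Defaults env_keep += "LD_LIBRARY_PATH"

    If the directive is ignored, a wrapper script that re-exports LD_LIBRARY_PATH=/the/path before launching the target binary, itself invoked via sudo, sidesteps the stripping entirely.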

    Read the article
