Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.

  • Excel Graph: How can I turn the data below into a 'time-based' graph

    - by Mike
    In my spreadsheet I am collecting the time periods when certain values have been changed. The user is restricted to 4 time periods, and I would like to show the data based on those time periods. I've included a mock-up of the data and the type of graph I would like to create. I've tried to create it for the last hour but am obviously missing something, so I thought I'd ask around. http://i48.tinypic.com/55lezr.jpg Many thanks for any help. Mike. P.S. How do I make this image appear in the message and not as a link?

    Read the article

  • Data Archiving vs not

    - by Recursion
    For the sake of data integrity, is it wiser to archive your files or just leave them unarchived? No compression is being used. My thinking is that if you leave your files unarchived, any corruption will only hurt a smaller number of files. But if you archive, let's say, all of your documents into one archive, then even the slightest corruption can make the entire archive unrecoverable. So what's the best way to keep a clean file system without being subject to data corruption?
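
    To make the tradeoff concrete: if every file goes into its own archive, a corrupted archive costs you only that one file, whereas one monolithic archive puts everything at risk. The sketch below is a minimal Python illustration of the per-file approach, using no compression as in the question; the folder names are hypothetical.

        # Minimal sketch: one uncompressed zip per file, so corruption of a
        # single archive affects only that file. Folder names are hypothetical.
        import zipfile
        from pathlib import Path

        SOURCE = Path("documents")   # assumed source folder
        TARGET = Path("archives")    # assumed destination folder
        TARGET.mkdir(exist_ok=True)

        for f in SOURCE.rglob("*"):
            if f.is_file():
                out = TARGET / (f.name + ".zip")   # note: name collisions not handled
                # ZIP_STORED = store only, no compression (as in the question)
                with zipfile.ZipFile(out, "w", compression=zipfile.ZIP_STORED) as z:
                    z.write(f, arcname=f.name)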

    Read the article

  • Why Do Standards Place Limits on Data Transfer Rates?

    - by Mehrdad
    This is a rather general question about hardware and standards in general: Why do they place limits on data transfer rates, and disallow manufacturers from exceeding those rates? (E.g. 100 Mbit/s for Wireless G, 150 Mbit/s for Wireless N, ...) Why not allow for some sort of handshake protocol, whereby the devices being connected agree upon the maximum throughput that they each support, and use that value instead? Why does there have to be a hard-coded limit, which would require a new standard for every single improvement to a data rate?
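
    The handshake described above can be sketched in a few lines: each device advertises the rates it supports, and the link runs at the fastest rate common to both. This is only an illustration of the idea with hypothetical rate lists, not how any real wireless standard actually negotiates.

        # Illustrative sketch of a capability handshake: each device advertises
        # the transfer rates it supports (Mbit/s) and the link uses the highest
        # rate common to both. Values are hypothetical.
        def negotiate_rate(local_rates, remote_rates):
            common = set(local_rates) & set(remote_rates)
            if not common:
                raise ValueError("no mutually supported rate")
            return max(common)

        device_a = [54, 150, 300, 600]   # rates device A claims to support
        device_b = [54, 150, 300]        # rates device B claims to support
        print(negotiate_rate(device_a, device_b))   # -> 300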

    Read the article

  • Converge Voice and Data networks using Sonicwall

    - by skinneejoe
    We are looking to converge VoIP and data traffic onto a single wire so that our client's VoIP phones pass data through to the user's computer. We are speccing out a new SonicWall NSA appliance to handle routing functions and layer 2 switches to manage VLANs. It's not a huge network; medium-sized. What should I know about converging the networks onto a single wire? Obviously I'll want to prioritize voice traffic. Is this handled solely in the SonicWall with QoS configurations, or do the layer 2 switches need to be configured differently? Any other pitfalls I should be aware of, or any good resources for learning more?

    Read the article

  • Does data mining qualify as abuse?

    - by Hybryd
    Hi all, today I had a strange experience with my ISP. They disabled my password for the internet connection, and when I called them, they enabled it again, but they didn't say why it happened. In the last couple of days I was running a data-mining script that I wrote against one forum to get some useful info about the business that I'm in. So I thought: maybe my ISP figured that 10,000 page requests in a couple of hours to the same site may be some kind of attack. What do you think, does it qualify as an attack? Is it even OK to data mine in that way?
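
    For scale, 10,000 requests in a couple of hours works out to roughly one to two requests per second. A scraper that paces itself looks far less like an attack; the sketch below only illustrates request pacing (the URL pattern, page count, and delay are hypothetical placeholders) and says nothing about what the forum or the ISP actually permits.

        # Minimal sketch of paced scraping: a fixed pause between requests keeps
        # the load on the target site gentle. URL and delay are hypothetical.
        import time
        import urllib.request

        BASE_URL = "http://forum.example.com/topic?page={}"   # hypothetical
        DELAY_SECONDS = 5

        for page in range(1, 101):
            with urllib.request.urlopen(BASE_URL.format(page)) as resp:
                html = resp.read()
            # ... parse and store html here ...
            time.sleep(DELAY_SECONDS)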

    Read the article

  • Excel data range - summing series within a date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. In this data I have date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare the total to an account value and then determine if I should take more than 1 position in each trade. As you can see, there are different numbers of trades on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here; your input is much appreciated. Forgive the presentation below: I cannot upload a pic (new user) and the format does not carry across from Excel, so each trade is listed on its own line with its values in column order.
    Date Pair L/S Initial Risk Win Loss BE Avg Gain Avg Loss pips/swing
    1/3/2012 EUR/USD S 15 1 10 15
    1/3/2012 USD/CHF L 15 1 0
    1/3/2012 AUD/USD S 15 1 16 18
    1/3/2012 NZD/USD S 15 1 7 8
    1/3/2012 AUD/JPY S 10 1 25 20
    1/4/2012 EUR/USD L 20 1 19 19
    1/4/2012 USD/CHF S 15 1 17 20
    1/4/2012 EUR/JPY L 20 1 0
    1/5/2012 EUR/USD L 15 1 10 20
    1/5/2012 GBP/USD L 20 1 15 20
    1/5/2012 USD/CHF S 15 1 0
    1/5/2012 USD/JPY S 10 1 7 10
    1/5/2012 USD/CAD S 15 1 28 36
    1/5/2012 AUD/USD L 15 1 20 20
    1/6/2012 USD/CAD S 15 1 5 -10
    1/6/2012 EUR/JPY L 15 1 7 7
    1/9/2012 AUD/USD S 15 1 22 30
    1/9/2012 NZD/USD S 15 1 10 15
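
    The per-day totals described above are a grouped sum keyed on the date. In Excel that is what a SUMIF over the date column does (for example =SUMIF($A:$A,A2,$D:$D), assuming dates in column A and Initial Risk in column D); the sketch below shows the same aggregation in plain Python with a few of the rows above hard-coded for illustration.

        # Minimal sketch: total the Initial Risk of all trades sharing a date,
        # so the combined daily risk can be compared against 1% of the account.
        from collections import defaultdict

        # (date, pair, long/short, initial_risk) taken from the rows above
        trades = [
            ("1/3/2012", "EUR/USD", "S", 15),
            ("1/3/2012", "USD/CHF", "L", 15),
            ("1/3/2012", "AUD/USD", "S", 15),
            ("1/3/2012", "NZD/USD", "S", 15),
            ("1/3/2012", "AUD/JPY", "S", 10),
            ("1/4/2012", "EUR/USD", "L", 20),
        ]

        risk_per_day = defaultdict(int)
        for date, pair, side, risk in trades:
            risk_per_day[date] += risk

        print(dict(risk_per_day))   # {'1/3/2012': 70, '1/4/2012': 20}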

    Read the article

  • URGENT: help recovering lost data

    - by Niels Kristian
    I have made a directory: sudo mkdir /ssd. The directory was supposed to be mounted to a RAID array called md3. This was done by adding /dev/md3 /ssd auto defaults 0 0 to fstab. Then, after I had used the directory for a while, I realized that I had forgotten to run sudo mount -a. I then ran it, and now the data is gone. I tried to uncomment the line in fstab and run sudo mount -a, but that didn't get back my data. What can I do!? CONTENT OF FSTAB:
    proc /proc proc defaults 0 0
    none /dev/pts devpts gid=5,mode=620 0 0
    /dev/md/0 none swap sw 0 0
    /dev/md/1 /boot ext3 defaults 0 0
    /dev/md/2 / ext4 defaults 0 0
    /dev/md3 /ssd auto defaults 0 0

    Read the article

  • Recover data from SD card

    - by Paul Tarjan
    I have a 2GB Kingston microSD card which is about 3 years old. I put it in a reader today in my Windows Vista computer, wrote a 32MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back in Vista, it now says "You need to format the disk in drive F: before you can use it." What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a # dd if=/dev/sdg of=~/tmp/sd.bin gives me a 2 gig file, and grepping the file suggests that lots of my data is still there. How can I put it back together?
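
    Since grepping the raw dd image still finds the data, one common way to "put it back together" is signature carving: scan the image for known file headers and cut candidate files out by offset. The sketch below shows the idea for JPEG photos only (start marker FF D8 FF, end marker FF D9); it is a rough illustration rather than a substitute for a dedicated recovery tool, and it reads the whole image into memory for simplicity.

        # Rough sketch of signature carving: find JPEG start/end markers in the
        # raw SD-card image and dump each candidate file. File names are
        # hypothetical; the whole 2 GB image is read into memory for simplicity.
        IMAGE = "sd.bin"
        SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
        EOI = b"\xff\xd9"       # JPEG end-of-image marker

        data = open(IMAGE, "rb").read()
        count = 0
        start = data.find(SOI)
        while start != -1:
            end = data.find(EOI, start)
            if end == -1:
                break
            with open(f"carved_{count:04d}.jpg", "wb") as out:
                out.write(data[start:end + 2])
            count += 1
            start = data.find(SOI, end)
        print(f"carved {count} candidate JPEGs")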

    Read the article

  • How to have Excel data validation display different data in the drop-down than is actually validated

    - by Memitim
    How can I provide a user with a drop-down menu in a cell that displays the contents from one column but actually writes the value from a different column to the cell, and validates against the values from that second column? I have a bit of code that very nearly does this (credit: DV0005 from the Contextures site):
        Private Sub Worksheet_Change(ByVal Target As Range)
            On Error GoTo errHandler
            If Target.Cells.Count > 1 Then GoTo exitHandler
            If Target.Column = 10 Then
                If Target.Value = "" Then GoTo exitHandler
                Application.EnableEvents = False
                ' Replace the selected value with the one from the next column over
                Target.Value = Worksheets("Measures").Range("B1") _
                    .Offset(Application.WorksheetFunction _
                    .Match(Target.Value, Worksheets("Measures").Range("Measures"), 0) - 1, 1)
            End If
        exitHandler:
            Application.EnableEvents = True   ' re-enable events before leaving
            Exit Sub
        errHandler:
            Resume exitHandler
        End Sub
    The drop-down displays the values from one column, for example Column B, but when selected it actually writes the value on the same row from Column C to the cell. However, data validation is actually validating against Column B, so if I manually enter something from Column C in the cell and try to move to another cell, data validation throws an error.

    Read the article

  • Recovering data from a corrupted disk

    - by r_honey
    I use an external hard disk to back up my data (it had 3 partitions). Last week when I plugged it in, the OS (Win 7) hung and I had to force-reboot the machine. When I turned it back on, the system just did not detect the hard disk. That was last Sunday, and I had to give up after some time. Now I'm back the next Sunday (today), and when I plug it back in to the machine, the OS detects the disk as well as all 3 partitions on it. But it says all 3 are unformatted and I can't access any of them. Is there any way to recover data from the 3 partitions? (I tried PC File Recovery and Recuva from Piriform, but neither detected these partitions.)

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you're emailing a file or using online storage sites like SkyDrive, there's a limit on the size of files you can use. Here's how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you'd need to split them. This same technique also works for any file sharing service, even if you were sending files through email. There are two ways that you can get around the limits: first, by just compressing the files if you're close to the limit, but the second and more interesting way is to split up the files into smaller chunks. Keep reading for how to do both.
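
    The splitting approach the article describes boils down to cutting the file into fixed-size chunks below the service's limit and concatenating the parts again on the other side. A minimal sketch of that follows (the file name is hypothetical and the chunk size is simply set to the 50 MB SkyDrive limit mentioned above); it is an illustration of the technique, not the specific tool the article uses.

        # Minimal sketch: split a large file into numbered parts no bigger than
        # the upload limit. Rejoining is just concatenating the parts in order.
        CHUNK_SIZE = 50 * 1024 * 1024   # 50 MB, the SkyDrive limit quoted above
        SOURCE = "bigfile.iso"          # hypothetical file name

        with open(SOURCE, "rb") as src:
            part = 0
            while True:
                chunk = src.read(CHUNK_SIZE)
                if not chunk:
                    break
                with open(f"{SOURCE}.part{part:03d}", "wb") as dst:
                    dst.write(chunk)
                part += 1

        # Rejoin later by concatenating the parts in order, e.g. with
        # "cat bigfile.iso.part* > bigfile.iso" on Linux/Mac.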

    Read the article

  • Coding in large chunks ... Code verification skills

    - by Andrew
    As a follow-up to my previous question: What is the best approach for coding in a slow compilation environment? To recap: I am stuck with a large software system in which the TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging into the system and to start "coding in large chunks", which I understand as: code for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeball the code for 15 minutes, and only after all that do the compilation and run the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have got rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source code verification / error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler and unit testing tools I had until recently, thus there should be a way to get back to the basics.

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a frontend web site and then be able to view, modify and download their company's invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web page invoice rendering performance. The 1 million invoice database covers just 90 days of sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats, so for example it is easy for us to convert to and from SQL to XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in each customer's home directory in XML format? And secondly, what would you suggest as the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions, etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, but through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are in doubt about how to include pages in one another and how to set up the jsp:useBean directives effectively. As the first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex): head header top navigation actionspanel main header actionspanel foot footer. Moreover, each JSP (also the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds data for the header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example: DirectoryJSP <- DirectoryBean, CompareJSP <- CompareBean. Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example: DirectoryJSP <- DirectoryBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean; CompareJSP <- CompareBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean. In the second case: should the subsequent beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, but rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, or misentries happened by accident on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, the data troubles may wake up some managers at the customer, and some processes on the customer's side may not work anymore because their processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development, and data migration can be seen as being to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software which somewhat works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts that belong to this topic?

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that pretend to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but I also understand that, as usual, it comes at a price. In particular: 1. Live binding is quite heavy, either with dirty watching (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word "framework", which means there are some boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. The quite usual time split: 90% of features are made in 10% of project time, but the remaining 10% take 90% of the project time. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time... If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same in AngularJS is about 420 LOC. Most of the code there seems like a fight with the framework itself... So my question is: what benefits do that MVC stuff and data binding give us? Is it just a buzzword popular among project managers, or do they give us something useful? If the latter, then what exactly?

    Read the article

  • jQuery with SharePoint solutions

    - by KunaalKapoor
    For me jQuery is the 'Plan B' for everything, and most of my projects include the use of jQuery for something or the other, so I decided to write a small note on what works best while using jQuery along with SharePoint. I prefer to use the jQuery JavaScript library, which is far more robust, easier to use, and allows for plugins. Follow the steps below to add jQuery to your master page. For Office 365, the preferred location to add jQuery files is the "Site Assets" library.
    Deployment Best Practices
    They are only as good as the context in which they are referenced. In other words, take into account your world before applying them.
    - Script your deployment options: a folder in SPD, use the file system, or make external references. The jQuery library is on the Microsoft Ajax Content Delivery Network. You may even choose to publish to and from the document library (there are pros and cons to this approach).
    - Reference options when referencing the script:
      - ScriptLink will make sure it's loaded at the top of the page and only loaded once. You need Visual Studio or SPD.
      - Content Editor Web Part (CEWP): drop it on the page and it's there. Easy but dangerous.
      - Custom Actions: great for global deployments of jQuery. Loads it on every page. It also works in sandbox installations.
    Deployment Maintenance Don'ts
    - Don't add scripts directly to your master page. That's way too much effort because the pages are hard to maintain.
    - Don't add scripts directly to the CEWP. Use a content link instead. That will allow for reuse, and if you or someone else deletes the CEWP you won't lose code in the web part.
    - Security: any scripts run with the same privileges as the current user. In other words, you can't get in trouble.
    Development Best Practices
    - Don't abuse the DOM. There are better options to load the DOM without hitting it 1,000 times.
    - Use other performance boosters.
    - Try other libraries. Try some custom code.
    - Avoid string conversion.
    - Minify your files.
    - Use CAML to reduce the number of returned rows.
    - Only update your jQuery library AFTER RIGOROUS REGRESSION TESTING.
    - CRUD operations can come with some fun.
    - SPServices wraps SharePoint's web services for execution.
    - The Bing SDK is pretty easy to use. You can add it to your page with a script, put it into a content editor web part and connect it from the address parameters in a list.
    Steps:
    1. Go to jquery.com and download the latest jQuery library to your desktop. You want to get the compressed production version, not the development version.
    2. Open SharePoint Designer (SPD) and connect to the root level of your site's site collection. In SPD, open the "Style Library" folder. Create a folder named "Scripts" inside the Style Library. Drag the jQuery library JavaScript file from your desktop into the Scripts folder. In the Scripts folder, create a new JavaScript file and name it (e.g. "actions.js").
    3. If you are using Visual Studio, add a folder for JS. You can create a new folder at the root level or, if you prefer cleaner solutions like me, you can use the layouts folder, which cleans out on deactivation/uninstall.
    4. Within the <head> tag of the master page, add a script reference to the jQuery library just above the content placeholder named "PlaceHolderAdditionalPageHead" (and above your custom CSS references, if applicable) as follows:
       <script src="/Style%20Library/Scripts/{jquery library file}.js" type="text/javascript"></script>
       Immediately after the jQuery library reference, add a script reference to your custom scripts file as follows:
       <script src="/Style%20Library/Scripts/actions.js" type="text/javascript"></script>
       Inside your script tag, you can test if jQuery is already defined and, if not, add it to the page:
       <script type='text/javascript'>
         if (typeof jQuery == 'undefined')
           document.write('<scr'+'ipt type="text/javascript" src="http://code.jquery.com/jquery-1.6.1.min.js"></sc'+'ript>');
       </script>
    For the inquisitive few... read on if you'd like :)
    Why jQuery on SharePoint is Awesome
    It's all about that visual wow factor. You can get past the "But it looks like SharePoint" reaction: take a long list view, put it into jQuery with pagination, etc., and you are the hero. It's also about new controls you get with jQuery that you couldn't do before.
    Why jQuery with SharePoint should be Awful
    Although it's fairly easy to get jQuery up and running, copy/paste can cause a problem. If you don't understand what it's doing in the Client Object Model and the Document Object Model, then it will do things on your site that were completely unexpected. Many blogs will note workarounds they employed on their sites.
    - Why it's not working: debugging "sucks". You need to develop small blocks of functionality and test them by putting in some alerts and console.log. Set breakpoints and monitor the DOM via Firebug and some IE development tools.
    - Performance: it happens all the time, but you should look at the tradeoffs. More time may give you more functionality.
    - Consistency: "But it works fine on my computer." So test on many browsers and take into account client resources.
    - Harm the farm: you need to code wisely and negatively test. Don't be the cause of a DoS attack that's really jQuery asking for a resource over and over and over again. So code wisely, do negative testing, and monitor server resources. They also did a demo where jQuery ran an endless loop to pull data from a list. It's a poor decision but also an easy mistake. They spiked their server resources within a couple of seconds and had to shut down the call before it brought the server down.
    Conclusion
    jQuery is now another tool in your tool kit. You don't have to use it. Use it where it makes sense and where it helps you get your job done. Don't abuse it; you will pay for it later. It will add to page bloat, so take that into account, and it can slow your performance.

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers, it seems like it's a precision error. I don't know what specifically causes the precision to fail - it doesn't seem like it's just the size of geometry that causes this distortion, because simply scaling vertex position passed to the the vertex shader does not solve the issue. Here are some examples of the texture distortion: Distorted Texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png The texture issue is limited to small scale geometry on OpenGL ES 2.0, otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin of XYZ(0,0,0) These texture issues do not occur on desktop OpenGL (works fine under Windows XP, Windows 7, and Mac OS X) I've only seen the problem occur on Android, iPhone, or WebGL(which is similar to OpenGL ES 2.0) All textures are power of 2 but the problem still occurs Scaling the vertex data - The values of a vertex's X Y Z location are in the range of: -65536 to +65536 floating point I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating point precision, but this didn't fix or lessen the texture distortion issue Scaling the modelview or scaling the projection matrix does not help Changing texture filtering options does not help Disabling mipmapping, or using GL_NEAREST/GL_LINEAR does nothing Enabling/disabling anisotropic does nothing The banding effect still occurs even when using GL_CLAMP Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader, also does not work precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either) I'm thinking this problem has to have been solved at one point - Seeing that OpenGL ES 2.0 -based games have been able to render large-scale, highly detailed geometry

    Read the article

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SP 2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I then decided to turn my attention to my employer's intranet environment. This would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well... kind of.
    Issue
    My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC... well, you get the point, it's jam-packed! But more importantly it's a stand-alone, self-contained VM environment. Microsoft has instructions on how to connect remotely with Diagnostic Studio here. The deceiving part of this is that SP2010DS prompts you for credentials, so I thought I was getting the right account to run the reports. I tried all the PowerShell commands in the link above, but I still ended up getting the following errors:
    06/28/2011 12:50:18 Connecting to remote server failed with the following error message: The WinRM client cannot process the request... If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.
    06/28/2011 12:54:47 Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.
    You might also get an error message like this: The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.
    Explanation
    After looking at the event logs on the target environment, I noticed that there were several Security exceptions. After looking at the specifics around who was denied access, I was able to see the account that was being denied access: it was the client machine's administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?
    Resolution
    I can tell you what I did, but I wish there was a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected the machine to the domain. I ran all the necessary remote credentials commands mentioned here. Those commands add the group policy for you! Once I did this I was able to authenticate properly and I was able to get the reports.
    Conclusion
    You can run SharePoint 2010 Diagnostic Studio remotely, but it will require some specific scenarios. A couple of things I should mention: as far as I understand, SP2010 DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a server admin on the SharePoint Server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally I would like to have a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that unless I enable Kerberos on my Windows 7 machine now. If you have it installed in the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large e-commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JavaScript suggestion list appears) has caught our eye. Currently, we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution: it has been created to index documents, not ever-changing data at the granularity that an e-commerce application would need. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, and which can supply data changes from the back office quickly to the frontend without much load or delay. I'm aware of the fact that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a more suitable solution seems fair, right? Oh, and commercial applications are fine, too; FOSS is not required. Thanks a bunch.
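
    The nightly full reindex described above is what delta indexing avoids: track when the index was last updated and push only the rows that changed since then. The sketch below illustrates that pattern against a hypothetical products table; index_document() is a stand-in for whatever indexer (Xapian or otherwise) is actually in use, and the schema is assumed, not taken from the question.

        # Sketch of incremental (delta) indexing: only rows modified since the
        # last run are re-indexed. Table schema and indexer call are hypothetical.
        import sqlite3
        from datetime import datetime, timezone

        def index_document(row):
            # placeholder for the real indexing call (Xapian, Solr, ...)
            print("indexing product", row["id"])

        def reindex_changed(conn, last_run_iso):
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                "SELECT id, name, description FROM products WHERE updated_at > ?",
                (last_run_iso,),
            )
            count = 0
            for row in rows:
                index_document(row)
                count += 1
            # return the new watermark to store for the next run
            return count, datetime.now(timezone.utc).isoformat()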

    Read the article
