Search Results

Search found 70675 results on 2827 pages for 'master data management'.


  • Hiding a file or data from being accessed unless on scheduled days [closed]

    - by gkt.pro
    Possible Duplicates: restricting access to volumes/disk even for admin account (Windows); How to restrict use of a computer? I want to limit my own access to some data: I should be able to access it only on certain days of the month (e.g., every 3rd day). Is there any way, such as encryption or some utility, to allow me to access the data only on those specific days? One idea I was considering was to encrypt the data and store the password (made complex and long so that I couldn't remember it offhand) on some website, which would then email the password back to me only on those specific days.
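    A minimal sketch of the "every 3rd day" check from that idea (hypothetical code; in the poster's scheme the equivalent logic would run on the remote website that emails the password back, not on the local machine):

        using System;

        class ScheduledAccessGate
        {
            // Releases the stored password only when today's day-of-month
            // falls on the allowed cycle (the 3rd, 6th, 9th, ...).
            static bool IsAccessDay(DateTime today, int cycleDays)
            {
                return today.Day % cycleDays == 0;
            }

            static void Main()
            {
                if (IsAccessDay(DateTime.Today, 3))
                    Console.WriteLine("Access day: email the decryption password.");
                else
                    Console.WriteLine("Not an access day: keep the password withheld.");
            }
        }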

    Read the article

  • How to turn on/off code modules?

    - by Safran Ali
    I am trying to run multiple sites off a single code base, and the code base consists of the following modules (i.e. classes): a User module, a Q&A module, and an FAQ module. Each module follows an MVC-style pattern, i.e. it consists of an Entity class, a Helper class (i.e. a static class), and Views (i.e. pages and controls). Let's say I have two sites, site1.com and site2.com, and I am trying to achieve the following: site1.com has the User, Q&A and FAQ modules up and running, while site2.com has the User and Q&A modules live with the FAQ module switched off, but able to be turned on if needed. My query is: what is the best way to achieve such functionality? Do I introduce a flag that I check on every page and control belonging to that module? It's rather like a CMS where you can turn different features on and off. I am trying to get my head around it; please provide me with an example or point out if I am taking the wrong approach.
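    One minimal sketch of the flag-based approach the question describes (all names hypothetical; nothing here is tied to a specific framework):

        using System;
        using System.Collections.Generic;

        // Hypothetical per-site module registry: each site declares which
        // modules are live, and pages/controls ask the registry instead of
        // scattering individual flags through the code base.
        enum Module { User, QA, Faq }

        class SiteConfiguration
        {
            private readonly HashSet<Module> enabled;

            public SiteConfiguration(params Module[] modules)
            {
                enabled = new HashSet<Module>(modules);
            }

            public bool IsEnabled(Module module)
            {
                return enabled.Contains(module);
            }
        }

        class Program
        {
            static void Main()
            {
                var site1 = new SiteConfiguration(Module.User, Module.QA, Module.Faq);
                var site2 = new SiteConfiguration(Module.User, Module.QA); // Faq switched off

                Console.WriteLine(site1.IsEnabled(Module.Faq)); // True
                Console.WriteLine(site2.IsEnabled(Module.Faq)); // False
            }
        }

    Loading the enabled set from per-site configuration (a file or database table) rather than hard-coding it would give the CMS-style switch the question asks about: turning the FAQ module on for site2.com becomes a data change, not a redeploy.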

    Read the article

  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
    While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users to be assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders: users could assign specific security rights on each and every document and folder within a project. It was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote# 603148.1) and added some extraneous pieces still related to Collaboration Manager. When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit, and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits. To enable ACLs:

        UseEntitySecurity=true

    Along with this configuration flag to turn ACLs on, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored:

        SpecialAuthGroups=HumanResources,Legal,Marketing

    Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList and xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and picking from a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then security simply falls back to just Security Group and Account access. As for how the values are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) for users, an "at" sign (@) for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Entries are separated by commas. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
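    As an illustration of that storage format (a hedged sketch, not Oracle code; the field value below is hypothetical, and the parsing is inferred purely from the notation described above):

        using System;

        class AclEntryDemo
        {
            static void Main()
            {
                // Hypothetical ACL field value in the described notation:
                // & = user, @ = group, : = role, access level in parentheses.
                string aclField = "&jsmith(RW),@Marketing(R),:contributor(RWDA)";

                foreach (string entry in aclField.Split(','))
                {
                    char marker = entry[0];
                    string kind = marker == '&' ? "user" : marker == '@' ? "group" : "role";
                    int open = entry.IndexOf('(');
                    string name = entry.Substring(1, open - 1);
                    string access = entry.Substring(open + 1, entry.Length - open - 2);
                    Console.WriteLine(kind + ": " + name + ", access: " + access);
                }
            }
        }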

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.

    Rule 1 – Never trust the automatically generated hidden datasets. Or ANY automatically generated MDX queries, for that matter. I've previously blogged about this here. If you examine the MDX generated in the hidden dataset, you will see that it generates the MDX in the context of the originating query by building a subcube; this means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to obey rule 2.

    Rule 2 – Follow this sequence when working with parameters and datasets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier for the parameter value defined earlier.

    Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build. Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From there you can search for references to the missing object, or just do a wholesale search and replace (after taking a backup copy, of course ;-). So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX. @Blakmk

    Read the article

  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one who can ask a survey question. This is something I'm really curious about, because many of the answers, recommendations or rants in blogs are not universally applicable to every database: small databases must sometimes be treated differently, and uber databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with?

    1) Up to 50GB
    2) 50-500GB
    3) 500GB - 2TB
    4) DEAR GOD THAT'S TOO MUCH INFORMATION!

    ...(read more)

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
    I get an error after I write the player name to the file for the second or third time ("An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll. Additional information: There is an error in XML document (18, 17)."). It stops in the high-score load method, on the line data = (HighScoreData)serializer.Deserialize(stream);. The problem is that somehow an additional ">" gets appended at the end of my .dat file. Could anyone tell me how to fix this?

    The file before the save looks like:

        <?xml version="1.0"?>
        <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <PlayerName>
            <string>neil</string>
            <string>shawn</string>
            <string>mark</string>
            <string>cindy</string>
            <string>sam</string>
          </PlayerName>
          <Score>
            <int>200</int>
            <int>180</int>
            <int>150</int>
            <int>100</int>
            <int>50</int>
          </Score>
          <Count>5</Count>
        </HighScoreData>

    The file after the save looks like this (note the stray ">" on the last line):

        <?xml version="1.0"?>
        <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <PlayerName>
            <string>Nick</string>
            <string>Nick</string>
            <string>neil</string>
            <string>shawn</string>
            <string>mark</string>
          </PlayerName>
          <Score>
            <int>210</int>
            <int>210</int>
            <int>200</int>
            <int>180</int>
            <int>150</int>
          </Score>
          <Count>5</Count>
        </HighScoreData>>

    The part of my code that does all of the XML save/load is below.

    Declarations:

        [Serializable]
        public struct HighScoreData
        {
            public string[] PlayerName;
            public int[] Score;
            public int Count;

            public HighScoreData(int count)
            {
                PlayerName = new string[count];
                Score = new int[count];
                Count = count;
            }
        }

        IAsyncResult result = null;
        bool inputName;
        HighScoreData data;
        int Score = 0;
        public string NAME;
        public string HighScoresFilename = "highscores.dat";

    Game1 constructor:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            Width = graphics.PreferredBackBufferWidth = 960;
            Height = graphics.PreferredBackBufferHeight = 640;
            GamerServicesComponent GSC = new GamerServicesComponent(this);
            Components.Add(GSC);
        }

    Initialize method (end of it):

        protected override void Initialize()
        {
            // other game code
            base.Initialize();
            string fullpath = Path.Combine(HighScoresFilename);
            if (!File.Exists(fullpath))
            {
                // If the file doesn't exist, make a fake one...
                // Create the data to save
                data = new HighScoreData(5);
                data.PlayerName[0] = "neil";  data.Score[0] = 200;
                data.PlayerName[1] = "shawn"; data.Score[1] = 180;
                data.PlayerName[2] = "mark";  data.Score[2] = 150;
                data.PlayerName[3] = "cindy"; data.Score[3] = 100;
                data.PlayerName[4] = "sam";   data.Score[4] = 50;
                SaveHighScores(data, HighScoresFilename);
            }
        }

    All methods for saving, loading and output:

        public static void SaveHighScores(HighScoreData data, string filename)
        {
            // Get the path of the save game
            string fullpath = Path.Combine("highscores.dat");

            // Open the file, creating it if necessary.
            // NOTE: FileMode.OpenOrCreate does not truncate an existing file,
            // so if the new XML is shorter than the old contents, stale trailing
            // bytes (like the extra '>' above) are left behind; FileMode.Create
            // truncates first and is the likely fix.
            FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate);
            try
            {
                // Convert the object to XML data and put it in the stream
                XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                serializer.Serialize(stream, data);
            }
            finally
            {
                // Close the file
                stream.Close();
            }
        }

        /* Load highscores */
        public static HighScoreData LoadHighScores(string filename)
        {
            HighScoreData data;

            // Get the path of the save game
            string fullpath = Path.Combine("highscores.dat");

            // Open the file
            FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read);
            try
            {
                // Read the data from the file
                XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                data = (HighScoreData)serializer.Deserialize(stream); // this is the line
                                                                      // where the program throws
            }
            finally
            {
                // Close the file
                stream.Close();
            }
            return (data);
        }

        /* Save player highscore when game ends */
        private void SaveHighScore()
        {
            // Load the saved scores and find where the new score fits
            HighScoreData data = LoadHighScores(HighScoresFilename);
            int scoreIndex = -1;
            for (int i = 0; i < data.Count; i++)
            {
                if (Score > data.Score[i])
                {
                    scoreIndex = i;
                    break;
                }
            }

            if (scoreIndex > -1)
            {
                // New high score found ... shift the lower entries down
                for (int i = data.Count - 1; i > scoreIndex; i--)
                {
                    data.PlayerName[i] = data.PlayerName[i - 1];
                    data.Score[i] = data.Score[i - 1];
                }
                data.PlayerName[scoreIndex] = NAME; // Retrieve user name here
                data.Score[scoreIndex] = Score;     // Retrieve score here
                SaveHighScores(data, HighScoresFilename);
            }
        }

        /* Iterate through the data when the high-score list is shown and build the string to display */
        public string makeHighScoreString()
        {
            // Load the saved data
            HighScoreData data2 = LoadHighScores(HighScoresFilename);

            // Build scoreBoardString
            string scoreBoardString = "Highscores:\n\n";
            for (int i = 0; i < 5; i++)
            {
                scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n";
            }
            return scoreBoardString;
        }

    When I get this working I will call the following at game over (right now I trigger it with a button press, so I can test it faster):

        public void InputYourName()
        {
            if (result == null && !Guide.IsVisible)
            {
                string title = "Name";
                string description = "Write your name in order to save your Score";
                string defaultText = "Nick";
                PlayerIndex playerIndex = new PlayerIndex();
                result = Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null);
                // NAME = result.ToString();
            }
            if (result != null && result.IsCompleted)
            {
                NAME = Guide.EndShowKeyboardInput(result);
                result = null;
                inputName = false;
                SaveHighScore();
            }
        }

    This is where I draw the output to the screen (I'll call it from the high-scores menu section when I am done debugging):

        spriteBatch.DrawString(Font1, "" + makeHighScoreString(), new Vector2(500, 200), Color.White);

    Read the article

  • What is Granularity?

    - by tonyrogerson
    Granularity defines “the lowest level of detail”; but what is meant by “the lowest level of detail”? Consider the Transactions table below:

        create table Transactions (
            TransactionID     int not null primary key clustered,
            TransactionDate   date not null,
            ClientID          int not null,
            StockID           int not null,
            TransactionAmount decimal(28, 2) not null,
            CommissionAmount  decimal(28, 5) not null
        )

    A Client can Trade in one or many Stocks on any date; there is no uniqueness to ClientID, StockID and TransactionDate...(read more)

    Read the article

  • Is there a keybind to minimize all windows, without a toggle?

    - by George Marian
    I know about the show-desktop keybind (default Ctrl+Alt+D), which I use often enough. However, I'm looking for a way to minimize all windows without activating "show desktop". I'm on a default install (i.e. Gnome, Metacity & Compiz). I've looked through all the locations I know of for configuring keybinds, and I've also looked at the default keybind lists in the Ubuntu wiki and the Compiz wiki (not to mention searching here). I'm interested in knowing whether such a keybind is available, if not in Gnome/Metacity/Compiz then elsewhere, or in some other way to accomplish this with a keybind.

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling, he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll create object A, with subtask method A, which creates object B, and then test all of the above. I create tasks for each of these, which seems to result in OK-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B, all while coding. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • silent failure while creating odbc data source

    - by Peter
    I just got really confused trying to create an ODBC data source in Windows 2003 R2. I can create a connection to my chosen server (a MS SQL Server) on the "User DSN" tab, but when I try to do the same thing on the "System DSN" tab, the process fails without an error message. I am able to connect to the target database fine at the end of configuring the new data source, but when I click OK, the data source just isn't there. No error message, no sign that anything went amiss, other than the lack of a new data source. Very annoying, as I had to repeat the process a few times to make sure I wasn't crazy. Anybody got any hints? I suspect it is a permission problem of some sort, but since there is no error message, I don't know where to start.

    Read the article

  • How to determine if a package is a meta-package from the command line?

    - by cirosantilli
    How can I determine whether a package is a meta-package from the command line, possibly via apt-get, aptitude or apt-cache? I have tried:

        apt-cache show texlive-full
        apt-cache showpkg texlive-full

    but the only way I can tell that this package is a meta-package is by reading the "en-description" field. Is there a more automatic way of doing this that will give me a yes/no response, or at least a field, like the "en-description", dedicated to this?

    Read the article

  • Package System Broken AMD FGLRX Unmet Dependencies

    - by fleamour
    I uninstalled the proprietary ATI driver to check whether it would fix a wrong Plymouth resolution. It did. But now I cannot reinstall either of the proprietary drivers listed, and the package system is broken with this error:

        The package system is broken
        If you are using third party repositories then disable them, since they are a common source of problems.
        Now run the following command in a terminal: apt-get install -f
        The following packages have unmet dependencies:
        fglrx-amdcccle: Depends: fglrx but it is not installed
                        Depends: libqtcore4 (= 4:4.5.3) but 4:4.8.1-0ubuntu4.1 is installed
                        Depends: libqtgui4 (= 4:4.5.3) but 4:4.8.1-0ubuntu4.1 is installed

    What should I do?

    Read the article

  • What is the problem git submodules are supposed to solve?

    - by Joshua Dance
    What is the problem that git submodules solve well? When should I use them? Or rather, what is their use case? The only use of submodules that I have seen 'in the wild' has been sharing code between multiple repositories. From what I have experienced, submodules do not appear to be ideally suited to this use case: you run into git submodule update woes, and your history gets filled with commits that merely update submodule pointers. If the 'sharing code' use case is not best solved by submodules, what problems are?

    Read the article

  • Which software development methodologies can be seen as foundations

    - by Bas
    I'm writing a small research paper which involves software development methodologies. I was looking into all the available methodologies, and I was wondering: of all of them, are there any that have provided the foundations for the others? For example, looking at the following methodologies: Agile, Prototyping, Cleanroom, Iterative, RAD, RUP, Spiral, Waterfall, XP, Lean, Scrum, V-Model, TDD. Can we say that Prototyping, Iterative, Spiral and Waterfall are the "foundation" for the others? Or is there no such thing as a "foundation", and does each methodology have its own unique history? I would of course like to describe all the methodologies in my research paper, but I simply don't have the time to do so, and that is why I would like to know which methodologies can be seen as representatives.

    Read the article

  • Advantages And Disadvantages Of Resource Serialization

    - by CPP_Person
    A good example: let's say I'm making a Pong game. I have a PNG image for the ball and another PNG image for the paddles. Now which would be better: loading the PNG images with a PNG loader, or loading them in a separate program, serializing them, and deserializing them in the game itself for use? The reason why this may be good to know is that game companies (or anyone, in the long run) often build all of their resources into some sort of file. For example, in the game Fallout: New Vegas the DLCs are loaded as .ESM files; each includes everything it needs, and all the game does is find it, deserialize it, and it has the resources. Games like Penumbra: Black Plague take a different approach and ship a folder which contains all the textures, sounds, scripts, etc. that they need, but not serialized (this applies to the game itself and the DLC). Which is the better approach, and why?
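    To make the two approaches concrete, here is a hedged sketch (assuming an XNA-style C# code base, since the question doesn't fix a framework; the packed format and LoadPackedTexture are invented for illustration):

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        static class TextureLoading
        {
            // Approach 1: load a loose PNG directly; the PNG is decoded at run time.
            public static Texture2D LoadLoosePng(GraphicsDevice device, string path)
            {
                using (FileStream stream = File.OpenRead(path))
                {
                    return Texture2D.FromStream(device, stream);
                }
            }

            // Approach 2 (hypothetical packed format): all resources live in one
            // archive; each entry is pre-processed raw pixel data that can be
            // handed to the GPU without any PNG decoding.
            public static Texture2D LoadPackedTexture(GraphicsDevice device, BinaryReader pack)
            {
                int width = pack.ReadInt32();
                int height = pack.ReadInt32();
                byte[] pixels = pack.ReadBytes(width * height * 4); // RGBA, 4 bytes/pixel

                var texture = new Texture2D(device, width, height);
                texture.SetData(pixels);
                return texture;
            }
        }

    The packed approach trades an extra build step for faster loads, simpler shipping (one file, as in the .ESM example) and some obfuscation; loose files, as in the Penumbra example, keep assets easy to inspect and edit. Which is better mostly depends on how often the assets change and whether players are meant to touch them.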

    Read the article

  • Unable to install any packages due to unrecoverable fatal error /var/lib/dpkg/diversions

    - by gforces
    I am currently unable to install any packages. I receive the following error:

        dpkg: unrecoverable fatal error, aborting:
         too-long line or missing newline in `/var/lib/dpkg/diversions'

    I have tried various approaches:

        sudo dpkg --configure -a
        sudo apt-get clean
        sudo dpkg-divert --list
        sudo apt-get check
        sudo apt-get install -f

    etc., but all to no avail: either the output was apparently normal, or the error above was thrown. I'm stumped as to how to proceed and would appreciate any assistance. If any further info is required, just ask.

    Update: Thanks for the reply. I followed the suggestions and am now getting a different error:

        (Reading database ... 50%dpkg: unrecoverable fatal error, aborting:
         files list file for package 'libksane0' is missing final newline
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    Here is the link for the current diversions file: http://paste.ubuntu.com/823500/ and the old broken one: http://paste.ubuntu.com/823502/. I tried to reinstall libksane0, but the same error occurred.

    Read the article

  • always purge on remove in package

    - by Jeroen
    Is there a way I can set up my package to always automatically include a --purge whenever a user does a regular apt-get remove mypackage? The reason is that I have some config files in /etc that should really be removed when the binaries are removed; otherwise they can lead to weird behavior. However, I still want to treat them as conf files, i.e. make sure apt does not just wipe them on every upgrade of the package. They should only be removed when the package is uninstalled. Currently I am manually deleting the /etc conf files in my postrm script. However, I just found out that this has an unfortunate side effect: if the user uninstalls and then later re-installs the package, the conf files won't be reinstalled, because apt thinks they are still there. So is there a way I can manually trigger a full 'purge' in my postrm, so that apt knows the conf files are gone?

    Read the article

  • Windows Azure Emulators On Your Desktop

    - by BuckWoody
    Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don't have to do that right away. In fact, you can download the Windows Azure Compute Emulator, a "cloud development environment", right on your desktop. No, it's not for production use; no, you won't have other people using your system as a cloud provider; and yes, there are some differences from production Windows Azure. But you'll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you've been testing, a few clicks deploy it to your subscription once you create one.

    So what deep magic does it take to run such a thing right on your laptop, or even a Virtual PC? Well, it's actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on; you're welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx

    This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator, and then you're all set:

    1. Right-click the icon for Visual Studio and select "Run as Administrator".
    2. Open a new "Cloud" type of project.
    3. Add the Web and Worker Roles that you want to code.
    4. When you're done with your design, press F5 to start the desktop version of Azure.

    Want to learn more about what's happening underneath? Right-click the tray icon with the Azure logo and select the two emulators to see what they are doing.

    In the configuration files, you'll see a "Use Development Storage" setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you're ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need.

    Want to learn more about all this?

    Overview of the Windows Azure Compute Emulator: http://msdn.microsoft.com/en-us/library/gg432968.aspx
    Overview of the Windows Azure Storage Emulator: http://msdn.microsoft.com/en-us/library/gg432983.aspx
    January 2011 Training Kit: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en
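    As a small illustration of that "Use Development Storage" setting (a hedged sketch against the StorageClient library that shipped with SDKs of that era; method names may differ in later storage libraries):

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class DevStorageDemo
        {
            static void Main()
            {
                // Points at the local Storage Emulator instead of a real
                // subscription; no Internet connection or storage keys required.
                CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

                // Use BLOB storage exactly as you would in the cloud.
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
                container.CreateIfNotExist(); // no trailing 's' in this library version

                CloudBlob blob = container.GetBlobReference("hello.txt");
                blob.UploadText("Hello from the desktop emulator!");
                Console.WriteLine(blob.DownloadText());
            }
        }

    When deploying for real, you would swap the development account for one built from your subscription's storage keys; the rest of the calls stay the same.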

    Read the article

  • How to concentrate on one project at a time. Divide and Conquer doesn't work for me [closed]

    - by refhat
    Possible Duplicate: Tips for staying focused and motivated on a project. I have serious issues concentrating on one project at a time; I can't even follow a divide-and-conquer approach. Once I start a project, I try to get things done as neatly as possible, but very soon I end up messing up many of its components. I try to divide and conquer, but my approach doesn't work smoothly, and then I wander off into other projects. Sometimes I spend many hours on trivial issues which, in fact, are not even issues. How do I avoid this trap and become a developer with a smooth workflow around my projects? I tend to lose concentration on the current project and wander into another one.

    Read the article

  • Is there an RSS feed of Ubuntu Release torrent files

    - by Valorin
    A lot of torrent search engines can provide an RSS feed of matches, allowing you to set up a torrent program to download items as they are published to the feed. This is useful for watching for releases. The problem with doing this through a random search engine for Ubuntu is that you usually get far too many torrents, as it picks up different custom versions and other software with the name in the title/description. See: http://www.mininova.org/rss/ubuntu. So I was wondering: is there an RSS feed somewhere that carries only the official torrent files for releases?

    Read the article

  • Fan always spinning on Maverick

    - by jb
    I use a Dell Studio 1555; after updating to Maverick my fan started spinning at full speed all the time. I use the xserver-xorg-video-ati/radeon drivers, because fglrx messes with my desktop resolution when connecting an external screen. The processor is mostly idle. I use some desktop effects. On the same configuration on Lucid my fan ran slower. It eats lots of battery time :( Bug filed: https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-ati/+bug/675156 Any insights on how to solve/debug it or work around it?

    Read the article

  • Unmet dependencies when trying to install openssh-server

    - by Sabiya
    I am running sudo apt-get install openssh-server on an Ubuntu 12.04 64-bit system. It gives me the following output:

        $ sudo apt-get install openssh-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         gcc-4.6 : Depends: libgcc1 (>= 1:4.6.3-1ubuntu5) but 1:4.6.2-14ubuntu2 is to be installed
         libc6 : Depends: libc-bin (= 2.15-0ubuntu10.1)
         libc6-dev : Depends: libc6 (= 2.15-0ubuntu10) but 2.15-0ubuntu10.1 is to be installed
         libgcc1 : Depends: gcc-4.6-base (= 4.6.2-14ubuntu2) but 4.6.3-1ubuntu5 is to be installed
         libxplc0.3.13:i386 : Depends: libc6:i386 (>= 2.5-5) but it is not going to be installed
                              Depends: libgcc1:i386 (>= 1:4.2-20070516) but it is not going to be installed
                              Depends: libstdc++6:i386 (>= 4.2-20070516) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • Laptop shuts down upon waking from suspend

    - by Bryan Head
    The computer enters suspend when I close the lid, choose suspend from the top-right drop-down, or hit the power button and press suspend; it doesn't matter which. I then attempt to wake the computer either by opening the lid (if it was closed) or hitting the power button. Again, it doesn't matter which. The computer will then immediately shut down about 50% of the time. It seems to be more likely to shut down the longer it has been in suspend. I took a snapshot of /var/log/pm-suspend.log after a successful resume and after a shutdown. The only difference (outside of timestamps, of course) was that a successful resume, after reporting the success of various suspend hooks, writes:

        Thu Jul 5 21:36:45 PDT 2012: performing suspend
        Thu Jul 5 21:37:10 PDT 2012: Awake.
        Thu Jul 5 21:37:10 PDT 2012: Running hooks for resume

    and then reports successful resume hooks. When it shuts down, the log ends at "performing suspend". I diffed the two files, so I know this is the only difference. Thus, it looks like the machine isn't even trying to wake up. I would love some ideas on this one. I've scoured the web but can't seem to find anyone else running into the same issue (it seems more common for a computer to shut down upon entering suspend, or only when hitting the power button to wake; I haven't seen any reports that are random like mine). I'll update with any requested information.

    Read the article

