Search Results

Search found 15556 results on 623 pages for 'login controls'.


  • Creating a voxel world with 3D arrays using threads

    - by Sean M.
    I am making a voxel game (a bit like Minecraft) in C++(11), and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large a world to create. Each Region is a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once it grows above 8x4x8 Regions.

    To alleviate this, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (a level being an xz plane with a height of 1) when there is a variable number of levels. I tried this:

    for(int i = 0; i < height; i++) { std::thread th(std::bind(&World::load, this, width, height, depth)); th.join(); }

    where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried:

    std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d));
    std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d));
    std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d));
    std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d));
    t1.join();
    t2.join();
    t3.join();
    t4.join();

    This works in that the world loads about 3-3.5 times faster, but it forces the height to be a multiple of 4, and it also gives the exact same VAO object to every single Region, which need individual VAOs in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread safe or something (again, unfamiliar with threads).

    So basically, my question is two-part: How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads? And how do I make the VAO objects thread safe, so when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO?

    Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO/VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp which controls VBOs and VAOs, in case you need it. If you need to see anything else just ask.

    EDIT: Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving onto other libraries.
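    On the first question, the standard pattern is to launch all the threads in one loop, storing them in a container, and join them in a second loop; joining inside the launch loop serializes everything, which is exactly what the first attempt above does. A minimal sketch, assuming a World::load(int width, int level, int depth) that loads the single level passed to it (the exact signature in the post differs):

    #include <thread>
    #include <vector>

    void World::loadAllLevels(int width, int height, int depth) {
        std::vector<std::thread> workers;
        workers.reserve(height);
        for (int level = 0; level < height; ++level) {
            // Launch one thread per level; do NOT join here, or the
            // threads will run one at a time as in the first attempt.
            workers.emplace_back(&World::load, this, width, level, depth);
        }
        for (std::thread& t : workers) {
            t.join(); // block the main thread until every worker finishes
        }
    }

    Joining in a separate loop only blocks the main thread; the workers all keep running concurrently until each one finishes.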


  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job¹.

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    ¹For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

    • RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments.
    • LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
    • ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
    • GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
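    As a concrete illustration of the summary above, here is a short C# sketch (the "MyApp" subfolder name is a placeholder) showing how a Windows program can resolve each of these locations through the standard API instead of hard-coding a path under Documents:

    using System;
    using System.IO;

    class AppPaths
    {
        static void Main()
        {
            // Roaming: per-user settings/data that follow the user across machines
            string roaming = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            // Local: per-user, per-machine data; large user-specific files go here
            string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
            // ProgramData: machine-wide data shared by all users
            string common = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
            // Temp: caches and anything disposable
            string temp = Path.GetTempPath();

            Console.WriteLine(Path.Combine(roaming, "MyApp"));
            Console.WriteLine(Path.Combine(local, "MyApp"));
            Console.WriteLine(Path.Combine(common, "MyApp"));
            Console.WriteLine(temp);
        }
    }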


  • Combobox binding with different types

    - by George Evjen
    Binding to comboboxes in Silverlight has been an adventure the past couple of days. In our framework at ArchitectNow we use LookupGroups and LookupValues. In our database we would have a LookupGroup of NBA teams, for example. The group would be called NBATeams; we get the LookupGroupID and then get the values from the LookupValues table, so we end up with a list of all 30+ teams. Our LookupValue entity has a display text (string), value (string), IsActive, and some other fields.

    With our applications we load all this information into the system when the user is logging in or right after they login, so in cache we have a list of groups and values that we can get at whenever we want to. We get this information in our framework simply by creating an observable collection of type LookupValue. To get a list of these values into our property all we have to do is:

    var NBATeams = AppContext.Current.LookupService.GetLookupValues("NBATeams");

    Our combobox then is bound like this (we use Telerik components in most if not all our projects):

    <telerik:RadComboBox ItemsSource="{Binding NBATeams}"></telerik:RadComboBox>

    This should give you a list in your combobox. We also set up another property in our ViewModel that is just a single object of the same type - "SelectedNBATeam". The SelectedItem in our combobox would look like the following; we set this to a two-way binding since we are sending data back:

    SelectedItem="{Binding SelectedNBATeam, Mode=TwoWay}"

    This is all pretty straightforward and we use this pattern throughout all our applications. What do you do, though, when you have a combobox in an ItemsControl or ListBox? Here we have a list of NBA teams as strings being brought back from the database. We can't have the selected item be our LookupValue, because the data is a string and it's being bound in an ItemsControl. In the example above we would just have the combobox in a form; here, though, we have it in an ItemsControl, where there is no selected item from the initial ItemsSource. In order to get the selected item to be displayed in the combobox, you have to convert the LookupValue to a string, then use SelectedValue in the combobox instead of SelectedItem. To convert the LookupValue we do this:

    Create an observable collection of strings:

    public ObservableCollection<string> NBATeams { get; set; }

    Then convert your lookups to strings:

    var NBATeams = new ObservableCollection<string>(AppContext.Current.LookupService.GetLookupValues("NBATeams").Select(x => x.DisplayText));

    This will give us a list of strings, and our selected value should be bound to the NBATeams property in our ItemsSource in our ItemsControl:

    SelectedValue="{Binding NBATeam, Mode=TwoWay}"
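    Pulling the two scenarios together, here is a condensed, self-contained sketch of the ViewModel shapes described above. LookupValue is reduced to a stand-in here; in the real framework the values come from AppContext.Current.LookupService, which is assumed, not shown:

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Linq;

    // Minimal stand-in for the framework's lookup entity
    public class LookupValue
    {
        public string DisplayText { get; set; }
        public string Value { get; set; }
        public bool IsActive { get; set; }
    }

    public class TeamsViewModel
    {
        // Form scenario: ItemsSource binds to the entities,
        // SelectedItem (TwoWay) binds to SelectedNBATeam
        public ObservableCollection<LookupValue> NBATeams { get; private set; }
        public LookupValue SelectedNBATeam { get; set; }

        // ItemsControl scenario: bind to plain strings and use SelectedValue
        public ObservableCollection<string> NBATeamNames { get; private set; }

        public TeamsViewModel(IList<LookupValue> lookups)
        {
            NBATeams = new ObservableCollection<LookupValue>(lookups);
            // Project the entities down to their display text so a plain
            // string property can drive SelectedValue in the ItemsControl
            NBATeamNames = new ObservableCollection<string>(lookups.Select(x => x.DisplayText));
        }
    }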


  • Issue 56 - Super Stylesheets Skinning in DotNetNuke 5

    May 2010. Welcome to Issue 56 of DNN Creative Magazine.

    In this issue we show you how to use the powerful new Super Stylesheets skinning feature in DotNetNuke 5. Super Stylesheets are ideal for both beginner and experienced skin designers; they provide skin layouts using CSS. The advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. They are also very quick to build, and you can change a skin layout in a matter of minutes rather than hours. We show you how to build a skin from the very beginning using Super Stylesheets: how to create various skin layouts as well as multi-layouts, how to style the skin, how to add tokens such as the logo, menu, and login links, and we walk you through how to create a fully working skin from scratch.

    Following this we continue the Open Web Studio tutorials. This month we demonstrate how to create an installable DotNetNuke PA module using OWS. This is an essential technique which allows you to package up the OWS applications that you have created and build them into an installable zip package. The zip file is then installable as a standard DotNetNuke module, which means you can easily install your OWS applications on other DotNetNuke installations by simply installing them as a standard DotNetNuke module.

    To finish, we have part six of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to create a news carousel using RAD, jQuery and the JCarousel plugin.

    This issue comes complete with 15 videos:
    • Skinning: Super Stylesheets Skinning in DotNetNuke 5 - DNN Layouts (12 videos - 98 mins)
    • Module Development Series: How to Create an Installable DotNetNuke PA Module Using OWS (3 videos - 23 mins)
    • How to Implement a News Carousel Using DotNetMushroom RAD and jQuery

    View issue 56 to download all of the videos in one zip file.

    DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 56 issues we have created 578 videos!


  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to set up a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

    cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password, and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server.

    The problem: when I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing?

    EDIT: ssh -v log:

    Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
    OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
    debug1: Reading configuration data /etc/ssh_config
    debug1: /etc/ssh_config line 20: Applying options for *
    debug1: /etc/ssh_config line 53: Applying options for *
    debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
    debug1: Connection established.
    debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
    debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
    debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
    debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.2
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
    debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
    debug1: Host '[SERVER URL]' is known and matches the RSA host key.
    debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,password
    debug1: Next authentication method: publickey
    debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
    debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
    debug1: Next authentication method: password
    [USER]@[SERVER URL]'s password:
    debug1: Authentication succeeded (password).
    Authenticated to [SERVER URL] ([[SERVER IP]]:22).
    debug1: channel 0: new [client-session]
    debug1: Requesting no-more-sessions@openssh.com
    debug1: Entering interactive session.
    debug1: Sending environment.
    debug1: Sending env LANG = en_US.UTF-8
    Welcome to [SERVER URL]
    Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting.
    Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
    [[SERVER NAME]]$
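    Two things stand out in that log. First, the client only tries the default identity files (id_rsa, id_dsa), so a key saved under a custom name is never offered; second, on shared hosts sshd silently ignores authorized_keys when permissions are too loose. A hedged pair of fixes to try (the key name is a placeholder, as in the post):

    # On the client: offer the custom-named key explicitly
    ssh -i ~/.ssh/[MY KEY] [USER]@[MACHINE]

    # On the server: tighten permissions so sshd will trust the key file
    chmod go-w ~
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys

    (Adding an IdentityFile line for the host in ~/.ssh/config makes the -i flag unnecessary.)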


  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture - some data on-premises and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections. The first was that all communications to SQL Azure need to be encrypted; it's a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

    Server=tcp:[serverName].database.windows.net;Database=myDataBase;User ID=LoginName;Password=myPassword;Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

    Server=tcp:[serverName].database.windows.net;Database=myDataBase;User ID=[LoginName]@[serverName];Password=myPassword;Trusted_Connection=False;Encrypt=True;

    Notice the difference? It's the User ID parameter. It includes the @ symbol and the name of the server - not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other?

    It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don't list them here) the server name parameter isn't sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind that the string limit for that is 128 characters - so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space...
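    For completeness, a small C# sketch showing the corrected string in use with the classic .NET provider - server, database, and credentials here are placeholders, not real values:

    using System.Data.SqlClient;

    class AzureConnectionDemo
    {
        static void Main()
        {
            // Note the [login]@[server] form in User ID; the server short name,
            // not the full DNS name, follows the @ sign.
            var cs = "Server=tcp:myserver.database.windows.net;Database=myDataBase;" +
                     "User ID=myLogin@myserver;Password=myPassword;" +
                     "Trusted_Connection=False;Encrypt=True;";

            using (var conn = new SqlConnection(cs))
            {
                conn.Open(); // fails fast here if the login format or encryption setting is wrong
            }
        }
    }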


  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS.

    Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back, so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters.

    Firstly, the web service architecture: we currently have two operations, GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before the item. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc.). Does the type of web API (operation versus resource focus) determine chunky vs. chatty? Since this is mobile (3G), are we better handling lots of smaller messages, or a few large ones?

    Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself.

    Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice Xcode libraries out there to do such things. I welcome any advice you might have.


  • Please Help Me Install Firestorm?

    - by Elle Caszatt
    I have been trying for hours just to install one program. In this time, I've tried my best to follow directions and not screw everything up, but I have. I'm new to Linux. I tried to install Firestorm and this is what happened:

    parent@ubuntu:~$ sudo '/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh'
    [sudo] password for parent:
    Enter the desired installation directory [/opt/firestorm-install]: /home/parent/downloads
     - Installing to /home/parent/downloads
    /home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh: line 80: /home/parent/downloads/etc/refresh_desktop_app_entry.sh: Permission denied
    parent@ubuntu:~$ sudo opt/firestorm-install
    sudo: opt/firestorm-install: command not found
    parent@ubuntu:~$ ./etc/refresh_desktop_app_entry.sh
    bash: ./etc/refresh_desktop_app_entry.sh: No such file or directory
    parent@ubuntu:~$ sudo '/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh'
    Enter the desired installation directory [/opt/firestorm-install]: /home/parent
     - Backing up previous installation to /home/parent.backup-2012-08-27
     - Installing to /home/parent
    cp: cannot stat `/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/*': No such file or directory
    Failed
    parent@ubuntu:~$

    Now whenever I go into my files it says it can't find anything, like "Cannot find home/parent/Downloads". Now, I KNOW there are downloads. I don't know why it's doing this all of a sudden. I'm so frustrated that I'm ready to just go back to Windows. I've already had to uninstall/reinstall Ubuntu once today. It's looking like I'm going to have to do it again. How can I fix my file problem that I'm now having, and can someone please, please tell me how to install Firestorm? I mean, they don't even have their repository listed. It's ridiculous to have to go through this over a program. Spotify wasn't hard at all to install, so why is this? Someone please help, and I'm sorry if I sound like a total idiot. I'm pretty tech savvy, but I'm honestly pretty upset after struggling with this for hours.

    Edit: Okay, I see the problem with the directory files (showing the error I mentioned above when I try to click on them). I can only access my downloads, desktop, etc. through the backup that was created when I tried to install Firestorm. It's like that's the real home now. How can I get it back to the way it was?

    Edit: Ubuntu has stopped working for me on reboot now. It doesn't go past the login screen. This is exactly what happened when I had to uninstall it before after trying to install Firestorm. Maybe I'm giving up too easily, but I think I'm just going to go back to Windows. If this is what's going to happen every time I innocently try to install a program then it's just not worth it. I installed it specifically to run Firestorm because Windows sucks up a lot of CPU and causes lag. I still appreciate any input, but this is just too much hassle for something that shouldn't be hard.
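    Reading the transcript, the second install run was pointed at /home/parent itself, so the installer renamed the real home directory to /home/parent.backup-2012-08-27 and then failed, leaving a broken home in its place. A hedged recovery sketch, run from a virtual console (Ctrl+Alt+F1) since the desktop may not start; the paths are taken from the transcript above:

    # Move the broken install out of the way and restore the backed-up home
    sudo mv /home/parent /home/parent.broken
    sudo mv /home/parent.backup-2012-08-27 /home/parent

    # Then reinstall, accepting the default /opt/firestorm-install this time
    sudo /home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh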


  • Profiling Startup Of VS2012 - JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) does include a performance and memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It does include one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount.

    The UI is already in Metro style and simple to use. Multi-process, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get in the All Methods grid a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" in the first window for Profiling Type, to check the memory consumption of VS. There are some interesting numbers to see, but I do really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and the CPU consumption is high, about which places I should look at? The snapshot summary gives a rough overview, which is OK for a first impression.

    Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you did check in the Assemblies list. The By Type instance list does only show types from assemblies which do not originate from Microsoft; by default mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked nearly empty.

    The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them, and you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue at a customer machine it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, ...) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases JustTrace is certainly the wrong tool.

    Next: SpeedTrace


  • Cannot install "ATI/AMD proprietary FGLRX graphics driver" (SystemError)

    - by Fisherman John
    I have just installed a fresh copy of 12.04.1 64-bit. I formatted my PC completely and enabled updates during the installation. After the installation was complete, I went on updating my software. However, when I wanted to install the additional drivers using the Additional Drivers tool (namely the "ATI/AMD proprietary FGLRX graphics driver"), it gave me this error:

    SystemError: E:Unable to correct problems, you have held broken packages.

    The same error shows up if I try installing the post-release updates driver. Installing the driver from the terminal results in this output:

    XXXXXX:~$ sudo apt-get install fglrx
    [sudo] password for XXXXX:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     fglrx : Depends: lib32gcc1 but it is not going to be installed
             Depends: libc6-i386 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    I get this from "sudo apt-get update": http://pastebin.com/AWAtDXjY
    But "sudo apt-get install fglrx" still gets me this error: http://pastebin.com/RYM55bVN
    And "sudo apt-get -f install fglrx" gives me this error: http://pastebin.com/xxekajvP

    Any help would be greatly appreciated. (Please note that I'm new to Linux, coming directly from Windows. I have tried Ubuntu twice or so before, but it was not for a long period of time. The drivers got installed smoothly the few times I've tried Ubuntu, but post-release updates never worked for me.)

    [I am going to work now, so I can only answer from my phone. Can't really test any new solutions you may give me until ~10 hours from now. Maybe more.]

    @stonedsquirrel: When I try to run that command, I get exactly the same "unmet dependencies / held broken packages" error as above. (I am Fisherman John, dunno how to login & thereby respond to your comment again.)
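    For what it's worth, the unmet dependencies named here are 32-bit compatibility packages, so a common first step (a hedged suggestion, not a guaranteed fix for every setup) is to install them explicitly and let apt surface whatever is really holding them back:

    sudo apt-get update
    sudo apt-get install lib32gcc1 libc6-i386   # the 32-bit compat libs fglrx depends on
    sudo apt-get install fglrx
    sudo apt-get -f install                     # if apt still reports broken packages, let it repair the state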


  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a database administrator, or even a developer, is required to wear a spy's hat. This necessity is oftentimes dictated by a need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources or, as in my most recent case, a closed-source custom application that was abandoned by a departed contractor without source code.

    It may not be news to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This "gem" is always quietly sitting in the Start - Programs - SQL Server <product version> - Performance Tools folder (yes, it is mostly for performance analysis, but not limited to that), ready to help you!

    So, to the action: let's start it up. Once ready, click on the File - New Trace button, or use Ctrl-N on your keyboard. The standard connection dialog you have seen in SSMS comes up, where you connect the standard way. One side note here: you will be able to connect only if your account belongs to the sysadmin fixed server role or has the ALTER TRACE permission.

    Upon a successful connection you will see the initial dialog. At this stage I will give a hint: you will have a wide variety of predefined templates, but to shorten your time to results you should opt for the TSQL_Grouped template. Now you need to set it up.

    In some cases you will know in advance the principal's login name (account) that needs to be monitored, and in some (like mine) you will not. But it is VERY helpful to monitor just one particular account, to minimize the amount of results returned. If you know it, you can go straight to the Events Selection tab, then click the Column Filters button, which brings up a dialog where you key in the account being monitored, without any mask (or wildcard).

    If you do not know the principal name, then you will need to poke around and look for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, not encrypted. This made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and also started my black-box application. Voilà, in under a minute I had the SQL statement captured.


  • Sampling SQL Server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration that was longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of event data that had been collected, and that it would be great to be able to reduce the collection overhead on the server if I could still get all activity from some batches.

    In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here). This isn't exactly what I want though, as there would certainly be events from a particular batch that wouldn't pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection, and still get all the events in batches for investigation.

    CREATE EVENT SESSION session_10_percent ON SERVER
    ADD EVENT sqlserver.sql_statement_starting(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlos.wait_info(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlos.wait_info_external(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlserver.sql_statement_completed(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
    ADD TARGET ring_buffer
    WITH (MAX_DISPATCH_LATENCY=30 SECONDS, TRACK_CAUSALITY=ON)
    GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from simple keyword-based "search boxes" in other information discovery products, also include:

    The Endeca Server supports set search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query - not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server supports second-order relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result. Determining this second-order relevance is the key to providing effective guidance.

    Support for queries and filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible, because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content.

    The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time. Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries: all parts of the summaries are clickable and searchable, and the user can simply click to add filters.

    We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.


  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:

    About us / About MyCompany / MyCompany
    • About / About us: description of the company, its mission, and its vision.
    • History: summary of milestones achieved by the company.
    • The team / Management / Board of directors: depending on the size of the company, there may be one or more pages describing the people involved in the company, depending on their position.
    • Awards: list of awards received by the company, if any.
    • In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.

    Media
    • Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image for their computer.
    • Logos: the company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes.
    • Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.

    Misc
    • Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
    • Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
    • Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
    • FAQ: list of common questions and answers to guide users and reduce the number of support requests.

    Follow us / Community
    • Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.

    Legal
    • Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
    • Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (request information to be deleted...).
    • Cookies: a page that is starting to appear on more and more websites due to new regulation (notably EU) imposing more transparency and control for users over cookies (e.g. the BBC cookie page).

    Any input is welcome.

    PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).


  • PASS: FY10 Actuals Posted

    - by Bill Graziano
    Earlier this year we published preliminary fiscal year 2010 financials to the Governance page on the PASS web site. Please remember that FY10 runs from July 1st, 2009 through June 30th, 2010 and includes the November 2009 Summit. We do our fiscal year this way so that the Summit falls earlier in the fiscal year. The financials we had posted were P&L numbers at the portfolio level. Prior to this we had posted our detailed budget but only posted the auditor's report at the end of each year. Today we updated our published financials to include:

    • Pre-audit actuals from FY10 at the same level as our budget. The document has both actuals and budget for FY10 side by side. This is over 20 pages of detailed financial information covering hundreds of line items.
    • A letter describing key differences between our budget and actuals. I walked through each line item where the difference was greater than $25,000 and explained what happened and why.
    • An updated financial graph going back to 2003 that includes FY10.

    This update should "close the loop" on our financials. You can now start with the published budget and compare it to the finished financials at the same level of detail. We also plan to publish the auditor's report when that is completed -- as we do every year.

    Overall I'm very happy with how FY10 turned out. Keep in mind that this was the November 2009 Summit, so we were still facing economic challenges. With all that, we were roughly break-even, showing a $15,000 profit on $3.9 million of revenue. I didn't find anything shocking in reviewing our actual vs. budget, but there were a few things that needed explanation. You can see those in the letter on the governance page.

    Please keep in mind that these are the actuals from our operating financials. The auditor may have us make adjustments for depreciation or other financial transactions. We may also account for certain transactions differently for tax purposes than we do for financial reporting purposes. I feel these financial statements give you the clearest picture of how our organization spends its money.

    We were late publishing these this year. We were working through some tax issues, and that delayed our ability to file our final tax forms, which delayed this process. In hindsight I should have published these documents as soon as we had them and not waited for the tax issues. We'll do this better in the future.

    And on a final note, you don't need to login to view these documents. If you have any questions you can post them here. If we get more than a few questions we may see about creating some forums for financial issues on the PASS web site.


  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, entering their credit card details with an additional login on another site, etc.).

    Intention & problem description: We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust, whereas complexity is fragile.

    Examples:

    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for a response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the user's connection?

    Database goes away: This sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, using PHP, to sort of start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback happen, I could sort of "lock" the user to prevent him from paying again or to properly account for payments - but how?

    And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
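    On the transaction point, a minimal PHP sketch of the shape this usually takes - everything here ($dsn, the payments table, sendXmlRequestToGateway()) is a placeholder, not any specific provider's API:

    <?php
    $pdo = new PDO($dsn, $dbUser, $dbPass);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    ignore_user_abort(true); // keep processing even if the payer closes the window

    try {
        $pdo->beginTransaction();
        $pdo->prepare('INSERT INTO payments (order_id, status) VALUES (?, ?)')
            ->execute([$orderId, 'pending']);

        // Hypothetical gateway exchange; its result decides the final status
        $response = sendXmlRequestToGateway($xmlPayload);

        $pdo->prepare('UPDATE payments SET status = ?, gateway_ref = ? WHERE order_id = ?')
            ->execute([$response->status, $response->reference, $orderId]);

        $pdo->commit(); // nothing becomes visible until both steps succeeded
    } catch (Exception $e) {
        $pdo->rollBack(); // no half-recorded payment on DB failure
        // The charge may still have gone through at the gateway, so log enough
        // detail to reconcile out-of-band rather than silently retrying.
        error_log('Payment persistence failed for order ' . $orderId . ': ' . $e->getMessage());
    }

    The caveat baked into the catch block is the important design point: a DB transaction cannot roll back the gateway's side, so the fallback is always logging plus out-of-band reconciliation.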


  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It Might Surprise You

    - by yaldahhakim
    Written by: David Krauss

    While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, business managers are now selecting cloud providers, so they need to be able to at least ask some high-level questions on the topic of risk and compliance.

    While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision, their access to exciting new capabilities in the cloud could be hindered by performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries will all have unique requirements, so it's key to take this into consideration as well.

    Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money when customers couldn't buy a product or service because the cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think.

    We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important when keeping your data safe and secure, and a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the more risky and the greater the chance that something will go wrong.

    What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments, and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time sink for executives and it puts the organization at risk.

    Is there a complete, integrated, modern cloud out there for business executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10,000 companies around the globe rely on Oracle Cloud application services every day - maybe your business should too.

    For more information, visit cloud.oracle.com.

    Additional resources:
    • Try it: cloud.oracle.com
    • Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html
    • Research report: Cloud for Business Managers: The Good, the Bad, and the Ugly


  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    "The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects," Iain Graham, director of energy and process manufacturing for Oracle's Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives.

    Q: Why is EPPM so important for today's process manufacturers?

    A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you're not managing your own workers and all the interdependencies among the different contractors, then you've got problems.

    Q: What else should process manufacturers look for?

    A: It's also important that an EPPM solution has the ability to manage more than just capital projects. For example, it's best to manage maintenance and capital projects in the same system. Say you're due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that's not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects.

    Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.


  • 12.04 Booting into Terminal

    - by user170796
    To preface this, I would like to say that I am completely new to Ubuntu and have essentially zero programming experience/experience working with the command line and terminal. I installed Ubuntu because I would like to get into programming. If you could provide me with the simplest instructions possible, I would be grateful.

    I have a Lenovo IdeaPad Y500 (Intel i7, NVIDIA GT 750M, 1 TB HDD, 16 GB SSD cache, 8 GB RAM) with Windows 8 on it. Using a live CD, I installed Ubuntu 12.04 onto a 75 GB partition. During the installation, I kept all default settings except for one thing: I decided to encrypt my home folder, and so checked the corresponding box. The installation completed, and I restarted. Once I restarted, I saw the options:

    • Ubuntu, with Linux 3.2.0-23-generic
    • Ubuntu, with Linux 3.2.0-23-generic (recovery mode)
    • Memory test (memtest86+)
    • Memory test (memtest86+, serial console 115200)
    • Windows Recovery Environment (loader) (on /dev/sdb3)
    • Windows 8 (loader) (on /dev/sdb5)
    • System Setup

    I chose the first option, and was directed to a screen with the Ubuntu logo and the row of five dots below it that change from orange to white. Then I was brought to a full-screen terminal that prompted me to login, which I did. I saw no option to boot into a GUI at all, and am lost. I've been searching around and have tried the "startx" command to no avail. Should the command have some sort of context or something?

    I've also tried selecting the recovery mode option from the boot manager. I've tried the resume option from the following menu, which eventually just shuts down the computer after displaying a lot of scrolling text that's too fast for me to read. I've also tried the failsafex mode from the recovery mode menu, which only brings up a terminal box at the bottom of the window that covers the entire bottom part of the screen. Commands won't work in this window.

    When I try to access Windows 8, I get a message saying that the EFI file path was not specified, or something along those lines. I had to enable Secure Boot in order to access Windows 8 (I had disabled it to be able to boot from the live CD), which is functioning normally. I am at a complete loss for what to do. Any help will be extremely appreciated.

    EDIT: Bonus question! If you could figure out a way for me to boot to Windows 8 without having to enable Secure Boot, it would save me a lot of trouble. I can deal with switching every time, but I'd rather not have to.
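    For anyone hitting the same wall, a few standard first steps from the text console (hedged suggestions; the right fix depends on what actually broke, and on NVIDIA Optimus hardware like this the graphics driver is a common culprit):

    # Try starting the display manager directly
    sudo service lightdm start

    # If that fails, reinstall the desktop stack and check the video driver
    sudo apt-get update
    sudo apt-get install --reinstall ubuntu-desktop
    sudo apt-get install nvidia-current

    Note that startx only works from a normal console login, not from the recovery menu's read-only root shell, and /var/log/Xorg.0.log usually names the component that stopped X from starting.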


  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate rewriting code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010: how can I get my Silverlight code organized?

    My requirements for a snippet manager were:

    • Needs to be FREE.
    • An easy way to view XAML/C# code-behind together in one "view".
    • The ability to store the code snippets in the cloud in case my HDD dies.
    • Searchable keywords to quickly find code snippets.

    I started looking for a snippet manager that would allow me to do just that and finally found Snippet Manager. Before going any further, I think that one of the most important things to note here is that this software supports 37 languages. It's not just for Silverlight developers nor C#-only guys; the software supports Java, SQL and even COBOL.

    Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections: the top part is my XAML and the bottom is my C# code-behind. I've included a sample below of my code snippets so that you can get an idea of how I organized them. Another thing that's great about this software is that it supports plain text; I added some connection strings in the TEXT section below.

    Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called "snippets" on my FTP server and hit the upload button once I am finished adding my new code snippets. This will allow me to use the code snippets on another computer, with this application on my USB key. See the screenshots below: enter your FTP credentials, hit the Upload button on the toolbar, then log in to your FTP server and verify the files are now on the server.

    Another great feature of the Snippet Manager is that you can also integrate it into VS2010 by clicking Tools -> External Tools and setting up the external tool to point to the executable. You can now launch it by going to Tools -> Snippet Manager. If you want, you could also add a shortcut to launch the program with hotkeys.

    As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn't go over every feature, but this is something that you might want to download and give a shot.

    Read the article

  • Getting current time in milliseconds

    - by user90293423
    How do I get the current time in milliseconds? I'm working on a hacking simulation game. Whenever someone connects to another computer/NPC, a login screen pops up with a button on the side called BruteForce. When BruteForce is clicked, I want the program to calculate how many seconds cracking the password will take, based on the player's CPU speed; that's the easy part. The hard part is that I want to enter a character into the password box every X milliseconds, based on a TimeToCrack divided by PasswordLength formula. But since I don't know how to find how many milliseconds have elapsed within the current second, the program waits until CurrentTime is higher than TimeBeforeTheLoopStarted + HowLongItTakesToTypeACharacter, which is always going to be a whole second. How would you handle my problem? I've commented the game-breaking part.

        std::vector<QString> hardware = user.getHardware();
        QString CPU = hardware[0];
        unsigned short Speed = 0;
        if (CPU == "OMG SingleCore 1.8GHZ") {
            Speed = 2;
        }

        const short passwordLength = password.length(); /* It's equal to 16 */
        int Time = passwordLength / Speed;
        double TypeSpeed = Time / passwordLength;

        time_t t = time(0);
        struct tm *now = localtime(&t);
        unsigned short EndTime = (now->tm_sec + Time) % 60;
        unsigned short CurrentTime = 0;
        short i = passwordLength - 1;

        do {
            t = time(0);
            now = localtime(&t);
            CurrentTime = now->tm_sec;
            do {
                t = time(0);
                now = localtime(&t);
            } while (now->tm_sec < CurrentTime + TypeSpeed); /* Highly flawed */
            /* Do this while your integer value is under this double value */
            QString tempPass = password;
            tempPass.chop(i);
            ui->lineEdit_2->setText(tempPass);
            i--;
        } while (CurrentTime != EndTime);
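    For what it's worth, here is a minimal sketch of millisecond timing with the C++11 <chrono> header. This assumes a C++11 compiler is available; the names are illustrative rather than taken from the game above:

        #include <chrono>
        #include <iostream>
        #include <thread>

        int main()
        {
            using Clock = std::chrono::steady_clock;

            // steady_clock is monotonic, so it is safe for measuring
            // intervals even if the wall clock is adjusted mid-run.
            const Clock::time_point start = Clock::now();

            std::this_thread::sleep_for(std::chrono::milliseconds(250));

            // Elapsed time since `start`, truncated to whole milliseconds.
            const auto elapsedMs =
                std::chrono::duration_cast<std::chrono::milliseconds>(
                    Clock::now() - start).count();

            std::cout << "elapsed: " << elapsedMs << " ms\n";
            return 0;
        }

    Since the surrounding code is a Qt application, QElapsedTimer provides the same monotonic millisecond counter, and a QTimer firing every TypeSpeed milliseconds would type each character without the busy-wait loop blocking the event loop.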

    Read the article

  • Blank desktop after updates today, only unity2d works now [closed]

    - by NewUbuntuUser
    Possible Duplicate: Unity doesn't load, no Launcher, no Dash appears

    I have been using 12.04 (Wubi) for a week now, and this is my first exposure to Linux. Everything was going fine until today: as soon as I updated through the Update Manager, Docky gave a message that compositing is required (or something similar), I got an error window asking me to report the error, and then the Update Manager asked me to reboot. After rebooting, I just get a blank desktop screen: no launcher, no toolbar. I have to restart with the power key. However, if I log in through Unity 2D, everything is fine except for the benefits of the 3D environment. I guess something got messed up by the update, and I can't figure out which program or file caused this mess. I would highly appreciate it if someone could help me out with this, as I really liked working on Ubuntu after Windows 7. Thanks!

    SOLVED -- Thanks @jrg for the link provided; it helped me get out of this mess. Some update of CompizConfig deactivated the Unity plugin and did something to the Animation Add-ons (the one you use for the burn effect, etc.). What I did was: on the blank desktop, I pressed Ctrl + Alt + T to bring up the terminal and typed ccsm. This brought up the CompizConfig Settings Manager, where I enabled the Unity plugin and rebooted. Everything started working fine.

    But then, when I re-enabled the add-ons plugin, everything went back to square one, and this time pressing Ctrl + Alt + T did not help. So I tried Ctrl + Alt + F1, which brought up a console, and typed unity --reset. It did some resetting and at the end reported some compositing done. I pressed Ctrl + Alt + F7 to come back to the desktop, and there it was: my old sweet desktop, with animation effects like wobbly windows, but without add-ons like the burn effect. I guess something went wrong with the last update of that add-on plugin, and now whenever I turn it on, everything goes poof! Six hours wasted, but I learned something new; not bad for an orthodontist, I guess. :)

    Read the article

  • Using LDAP with the ZFS Storage Appliance

    - by user13138569
    Let's connect the ZFS Storage Appliance to OpenLDAP. First, set up an OpenLDAP server on Solaris 11; the slapd.conf and LDIF below define the directory and create a user named user01.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include         /etc/openldap/schema/core.schema
        include         /etc/openldap/schema/cosine.schema
        include         /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral       ldap://root.openldap.org

        pidfile         /var/openldap/run/slapd.pid
        argsfile        /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath      /usr/lib/openldap
        moduleload      back_bdb.la
        # moduleload    back_hdb.la
        # moduleload    back_ldap.la

        # Sample security restrictions
        #       Require integrity protection (prevent hijacking)
        #       Require 112-bit (3DES or better) encryption for updates
        #       Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        #       Root DSE: allow anyone to read it
        #       Subschema (sub)entry DSE: allow anyone to read it
        #       Other DSEs:
        #               Allow self write access
        #               Allow authenticated users read access
        #               Allow anonymous users to authenticate
        #       Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        #       by self write
        #       by users read
        #       by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn. (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################

        database        bdb
        suffix          "dc=oracle,dc=com"
        rootdn          "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw          secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory       /var/openldap/openldap-data
        # Indices to maintain
        index   objectClass     eq

    The LDIF to load:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    On the ZFS Storage Appliance, open Configuration > SERVICES > LDAP and set the Base search DN to the directory suffix (dc=oracle,dc=com). Once the LDAP service is configured, the appliance can resolve user01; before it is configured, the lookup fails with "Unknown or invalid user". The directory can also be verified from a Solaris 11 client by enabling the LDAP client services and checking the account with getent:

        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    Finally, mount a share from the appliance over NFS and confirm that user01 can create files with the expected ownership:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    With that, users are resolved from LDAP on both the appliance and its clients!

    Read the article

  • Smooth animation on QStackedWidget

    - by saravanan
    Hi, I have five QWidgets (each QWidget holding different controls), all placed inside one parent QStackedWidget. To change the displayed QWidget I use the setCurrentIndex(int) function. There is no problem with displaying, but I need a page-change animation, and nothing I tried with the QStackedWidget worked. So I removed the QStackedWidget, placed the QWidgets directly, and tried QPropertyAnimation. The QPropertyAnimation works, but the animation is not smooth. Here is my QPropertyAnimation code:

        QRect pGeo(8, 152, width() - 16, height() - 160);
        profilePage->show(); // first QWidget
        QPropertyAnimation *anim1 = new QPropertyAnimation(profilePage, "geometry");
        anim1->setStartValue(QRect(200, pGeo.y(), pGeo.width(), pGeo.height()));
        anim1->setEndValue(pGeo);
        anim1->setEasingCurve(QEasingCurve::InOutSine);
        anim1->setDuration(500);
        anim1->start();

    How do I get a smooth animation using QWidget or QStackedWidget? Please give me some suggestions.
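    One possible approach is to keep the QStackedWidget and slide both pages at once. Below is a minimal sketch, assuming Qt 5 (for the lambda connection); the function name slideToIndex and the 300 ms duration are illustrative choices, not part of the question. Animating the pos property rather than geometry avoids triggering a resize and relayout of the page on every frame, which is a common cause of stuttering:

        #include <QStackedWidget>
        #include <QPropertyAnimation>
        #include <QParallelAnimationGroup>
        #include <QEasingCurve>
        #include <QPoint>

        // Slide the current page out to the left while the target page
        // slides in from the right, then commit the index change.
        void slideToIndex(QStackedWidget *stack, int nextIndex)
        {
            QWidget *current = stack->currentWidget();
            QWidget *next = stack->widget(nextIndex);
            if (!next || next == current)
                return;

            const int w = stack->width();

            // Park the incoming page just off-screen to the right and
            // make it visible on top of the current page.
            next->setGeometry(w, 0, w, stack->height());
            next->show();
            next->raise();

            QPropertyAnimation *out = new QPropertyAnimation(current, "pos");
            out->setDuration(300);
            out->setStartValue(QPoint(0, 0));
            out->setEndValue(QPoint(-w, 0));
            out->setEasingCurve(QEasingCurve::OutCubic);

            QPropertyAnimation *in = new QPropertyAnimation(next, "pos");
            in->setDuration(300);
            in->setStartValue(QPoint(w, 0));
            in->setEndValue(QPoint(0, 0));
            in->setEasingCurve(QEasingCurve::OutCubic);

            QParallelAnimationGroup *group = new QParallelAnimationGroup(stack);
            group->addAnimation(out);
            group->addAnimation(in);

            QObject::connect(group, &QParallelAnimationGroup::finished, [=]() {
                stack->setCurrentIndex(nextIndex); // hides the old page
                current->move(0, 0);               // reset it for next time
            });
            group->start(QAbstractAnimation::DeleteWhenStopped);
        }

    Calling slideToIndex(stack, 2) then replaces a plain setCurrentIndex(2). If it still stutters, heavy per-frame repainting of the pages is the usual bottleneck, so simplifying what the pages paint during the transition can help.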

    Read the article

  • Windows 7 UAC manifest file for some VB6 application

    - by Daniel
    Hi all, I have an old VB6 application which should run on Windows 7 (with UAC set to the default level, 3 of 4, IMHO). It has the functionality to update itself, and Windows 7 is now complaining that it would modify the computer (at least Windows 7 is right about that). I was able to run it on Vista with some kind of manifest file, but this no longer seems to work (which is the intended behaviour, if I think about it). The manifest file is this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="1.1.0.24" processorArchitecture="X86"
                            name="IKOfficeAppStarter" type="win32"/>
          <description>IKOffice Starter</description>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls"
                                version="6.0.0.0" processorArchitecture="X86"
                                publicKeyToken="6595b64144ccf1df" language="*"/>
            </dependentAssembly>
          </dependency>
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="true"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    The manifest sits next to the exe "IKOffice Starter.exe" and is called "IKOffice Starter.exe.manifest", which should be okay. The shield icon is now gone from my .exe, but when I try to start the software, I get the message "Der angeforderte Vorgang erfordert höhere Rechte", which translates to "the requested operation requires elevation". What can I do to stop Windows from bugging me, so I can install this application on our clients' computers? I already told Windows to run it as invoker, so why is it still complaining?

    Read the article

< Previous Page | 508 509 510 511 512 513 514 515 516 517 518 519  | Next Page >