Search Results

Search found 17470 results on 699 pages for 'single quote'.


  • General directions on developing a server side control system for JS/Canvas Action RPG

    - by Billy Ninja
    Well, yesterday I asked about anti-cheat JS, and confirmed what I kind of already knew: it's just not possible. Now I want to measure roughly how hard it is to implement server-side checking that is agnostic to client input and that doesn't mess with the game experience too much. I don't want to waste too many resources on this, since it's initially going to be a single-player game, though I may want to introduce some kind of ranking or trading system later on; I'd rather deliver more cool game features instead. I don't want to have to guarantee super-fast server responses to keep the game lag free. I'd rather go with looser, discrete control of key variables and instances - like storing the user's actions in a FIFO buffer on the client and pushing those actions to the server gradually. I'd love to see an elegant, generic solution that I could plug into my client game logic root (not having to scatter checks everywhere in my client JS) and have a few classes on the Node.js server that could handle it, without having to mirror/describe all of my game entities a second time on the server.
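
    To make the buffered-action idea concrete, here is a minimal client-side sketch (not from the question; the /actions endpoint and the flush interval are illustrative assumptions) of a FIFO action buffer that batches player actions and pushes them to a Node.js endpoint in the background:

        // Hedged sketch: queue gameplay actions client-side and flush them in batches.
        // The server can later replay and sanity-check these without blocking the game loop.
        var actionBuffer = [];

        function recordAction(type, payload) {
          // e.g. recordAction('attack', { targetId: 7 })
          actionBuffer.push({ type: type, payload: payload, t: Date.now() });
        }

        setInterval(function flushActions() {
          if (actionBuffer.length === 0) return;
          var batch = actionBuffer.splice(0, actionBuffer.length); // drain the FIFO
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/actions', true);                      // illustrative endpoint
          xhr.setRequestHeader('Content-Type', 'application/json');
          xhr.onerror = function () {
            // put the batch back at the front so nothing is lost on a failed push
            actionBuffer = batch.concat(actionBuffer);
          };
          xhr.send(JSON.stringify(batch));
        }, 5000); // push every 5 seconds rather than per-action

    The server side could then consume these batches with a plain Express route and check only key variables (position deltas, resource gains) rather than mirroring every entity.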

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)? Many customers underutilize UPK. Some customers think of UPK only as a training creation solution or only for creating documentation. To get the full value of UPK you need to first evaluate how the UPK developer is installed. Single user or multi user? If you have more than two UPK developers, then there is a significant benefit from installing UPK in multi-user mode. This helps drive collaboration, automatic version control, and better facilitation of the workflow and state features through customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed, and for how many applications? If these questions have you thinking about your overall usage of UPK and you see significant room for improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK, and start getting more out of the product.

    Read the article

  • SQL SERVER – 2011 – SEQUENCE is not IDENTITY

    - by pinaldave
    Yesterday I posted the blog post SQL SERVER – 2011 – Introduction to SEQUENCE – Simple Example of SEQUENCE, and I received a comment where a user was not clear about the difference between SEQUENCE and IDENTITY. The reality is that SEQUENCE is not like IDENTITY; there is a very clear difference between them. IDENTITY belongs to a single column of a single table, whereas a SEQUENCE is always incrementing and is not dependent on any table. Here is a quick example:
    USE AdventureWorks2008R2
    GO
    CREATE SEQUENCE [Seq]
    AS [int]
    START WITH 1
    INCREMENT BY 1
    MAXVALUE 20000
    GO
    -- Run five times
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    SELECT NEXT VALUE FOR Seq AS SeqNumber;
    GO
    -- Clean Up
    DROP SEQUENCE [Seq]
    GO
    Here is the resultset. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
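
    To make the contrast concrete, here is a small sketch (not from the original post; table and column names are illustrative) showing IDENTITY bound to one column of one table versus the same SEQUENCE feeding defaults on any table:

        -- IDENTITY lives inside the table definition and only that column can use it
        CREATE TABLE dbo.OrdersDemo
        (
            OrderID INT IDENTITY(1,1) PRIMARY KEY,
            OrderDate DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- A SEQUENCE is a separate object; any table (or plain query) can draw from it
        CREATE SEQUENCE dbo.SharedSeq AS INT START WITH 1 INCREMENT BY 1;

        CREATE TABLE dbo.InvoicesDemo
        (
            InvoiceID INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.SharedSeq),
            Amount MONEY NOT NULL,
            CONSTRAINT PK_InvoicesDemo PRIMARY KEY (InvoiceID)
        );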

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, ones that retrieve values from session state, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the ControlParameter can use. Moreover, if you are using the selected CheckBoxList items to query a database, you'll quickly find that SQL does not offer out-of-the-box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more!
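
    As an illustration of the second step described above (a hedged sketch, not the article's actual code; cblGenres, dsBooks and GenreList are illustrative names in an ASP.NET page's code-behind), building the comma-delimited list from the selected CheckBoxList items might look like this:

        protected void btnFilter_Click(object sender, EventArgs e)
        {
            // collect the values of the ticked CheckBoxList items
            var selected = new System.Collections.Generic.List<string>();
            foreach (System.Web.UI.WebControls.ListItem item in cblGenres.Items)
            {
                if (item.Selected)
                    selected.Add(item.Value);
            }

            // e.g. "2,5,9"; an empty string means no genre was ticked
            string genreList = string.Join(",", selected.ToArray());
            dsBooks.SelectParameters["GenreList"].DefaultValue = genreList;
        }

    On the SQL side, one common pattern for consuming such a list is a split table-valued function or a LIKE trick such as WHERE ',' + @GenreList + ',' LIKE '%,' + CAST(GenreID AS varchar(10)) + ',%'.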

    Read the article

  • Don’t use MySQL .net connector, here is why?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/donrsquot-use-mysql-.net-connector-here-is-why.aspx If you use the .NET MySQL connector and your projects, new or old, use different versions of the connector, then you need to upgrade them all to the latest version (if you don't use Copy Local=true for the bin assembly). And that is not the only problem that happened to me. In my case I used .NET connector 6.7.4.0, and let's see what happened after I started using it. The 6.7.4.0 installer registers the MySQL module in machine.config, and that breaks every application you haven't deployed with MySQL. Suppose, for example, I just create a website (in WebMatrix 3), add my index.cshtml, and look at what the preview gives me. This means I need to add a reference to MySql.Web even though I don't use any kind of database, and I have to do it for every ASP.NET MVC project whether or not it uses MySQL. It's especially problematic when some of my projects use an older .NET MySQL connector. If you have trouble like this, simply use NuGet and say bye-bye to this trouble.
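
    If you hit the same machine.config problem, a commonly suggested workaround (a hedged sketch; the invariant name below is the standard MySQL ADO.NET provider registration, so verify it against what the installer actually wrote to your machine.config) is to remove the inherited provider entry in the affected site's web.config:

        <!-- web.config of the site that does not use MySQL -->
        <system.data>
          <DbProviderFactories>
            <!-- undo the registration the connector installer added to machine.config -->
            <remove invariant="MySql.Data.MySqlClient" />
          </DbProviderFactories>
        </system.data>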

    Read the article

  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one with 9 NICs on one VLAN and 1 on the other), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet, and get 9 ARP is-at ARP replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way.
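
    For reference, the multiple-reply behaviour described above is the Linux "ARP flux" default, where every interface answers ARP for any local address. A hedged sketch of the sysctl settings commonly used to restrict replies to the interface that actually owns the address (verify against your kernel documentation before relying on it):

        # /etc/sysctl.d/arp.conf -- hedged sketch, not a drop-in fix
        net.ipv4.conf.all.arp_ignore = 1    # answer ARP only if the target IP is on the receiving interface
        net.ipv4.conf.all.arp_announce = 2  # prefer the best local address when sourcing ARP requests
        # apply immediately with: sysctl -p /etc/sysctl.d/arp.conf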

    Read the article

  • How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk

    - by The Geek
    We’ve covered loads of different anti-virus, Linux, and other boot disks that help you repair or recover your system, but why limit yourself to just one? Here’s how to combine your favorite repair disks together to create the ultimate repair toolkit for broken Windows systems—all on a single flash drive. The ones we’ve covered already? Here’s a quick list of all the ways you can recover your system with a rescue disk: How to Use the Avira Rescue CD to Clean Your Infected PC How to Use the BitDefender Rescue CD to Clean Your Infected PC How to Use the Kaspersky Rescue Disk to Clean Your Infected PC Change or Reset Windows Password from a Ubuntu Live CD The 10 Cleverest Ways to Use Linux to Fix Your Windows PC Change Your Forgotten Windows Password with the Linux System Rescue CD Use Ubuntu Live CD to Backup Files from Your Dead Windows Computer If you need to clean up an infected system, we’d absolutely recommend the BitDefender CD, since it’s auto-updating. Best bet? Create your ultimate boot disk with as many of the different utilities as your flash drive can hold.

    Read the article

  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a python subprocess, read the stdout into a StringIO, then parse for things like Registrar, Name Servers, etc. My question is: there seem to be many paid whois services, which suggests there must be a reason why people don't just use this Ubuntu package. I'm wondering if there's a request limit on the number of requests from a single IP to the package's whois server? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:
    qmul.ac.uk is searchable
    kat.ph is not searchable
    ahram.org.eg is not searchable
    Any particular reason for that?
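
    For what it's worth, a minimal sketch of the subprocess approach described above (assuming Python 3.7+ and the Ubuntu whois package; field labels differ between registries, so the parsing below is illustrative only):

        import subprocess

        def whois_lookup(domain):
            """Run the system whois client and pull out a few common fields."""
            out = subprocess.run(["whois", domain], capture_output=True,
                                 text=True, timeout=30).stdout
            record = {}
            for line in out.splitlines():
                key, sep, value = line.partition(":")
                key, value = key.strip().lower(), value.strip()
                if sep and value and key in ("registrar", "name server", "nserver"):
                    record.setdefault(key, []).append(value)
            return record

        print(whois_lookup("qmul.ac.uk"))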

    Read the article

  • Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”

    - by Kavitha
    Want to add a watermark to your PDF files with a single click? You can use the freeware Batch PDF Watermark. Batch PDF Watermark is a super cool application that lets you add image or text watermarks to multiple files at a time. The Office 2010-style ribbon user interface is very easy to use and provides many options to configure watermark properties like font styles, positioning, transparency levels, rotation of the watermark image, scaling of the watermark image, and so on. Before running the watermark process, you can even preview it. To select multiple PDF files to watermark, you can use the “Add Files” option to hand pick the required files, or the “Add Folder” option to choose all the PDF files in a folder. Download Batch PDF Watermark [via liferocks]

    Read the article

  • High Load mysql on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at the maximum, with only about 500 MB free, and most of the memory is consumed by MySQL. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config:
    [mysqld]
    #
    # * Basic Settings
    #
    default-time-zone = "+04:00"
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    language = /usr/share/mysql/english
    skip-external-locking
    default-time-zone='Europe/Moscow'
    #
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #
    # * Fine Tuning
    #
    #low_priority_updates = 1
    concurrent_insert = ALWAYS
    wait_timeout = 600
    interactive_timeout = 600
    #normal
    key_buffer_size = 2024M
    #key_buffer_size = 1512M
    #70% hot cache
    key_cache_division_limit= 70
    #16-32
    max_allowed_packet = 32M
    #1-16M
    thread_stack = 8M
    #40-50
    thread_cache_size = 50
    #orderby groupby sort
    sort_buffer_size = 64M
    #same
    myisam_sort_buffer_size = 400M
    #temp table creates when group_by
    tmp_table_size = 3000M
    #tables in memory
    max_heap_table_size = 3000M
    #on disk
    open_files_limit = 10000
    table_cache = 10000
    join_buffer_size = 5M
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam-recover = BACKUP
    #myisam_use_mmap = 1
    max_connections = 200
    thread_concurrency = 8
    #
    # * Query Cache Configuration
    #
    #more ignored
    query_cache_limit = 50M
    query_cache_size = 210M
    #on query cache
    query_cache_type = 1
    #
    # * Logging and Replication
    #
    # Both location gets rotated by the cronjob.
    # Be aware that this log type is a performance killer.
    #log = /var/log/mysql/mysql.log
    #
    # Error logging goes to syslog. This is a Debian improvement :)
    #
    # Here you can see queries with especially long duration
    log_slow_queries = /var/log/mysql/mysql-slow.log
    long_query_time = 1
    log-queries-not-using-indexes
    #
    # The following can be used as easy to replay backup logs or for replication.
    # note: if you are setting up a replication slave, see README.Debian about
    # other settings you may need to change.
    #server-id = 1
    #log_bin = /var/log/mysql/mysql-bin.log
    server-id = 1
    log-bin = /var/lib/mysql/mysql-bin
    #replicate-do-db = gate
    log-bin-index = /var/lib/mysql/mysql-bin.index
    log-error = /var/lib/mysql/mysql-bin.err
    relay-log = /var/lib/mysql/relay-bin
    relay-log-info-file = /var/lib/mysql/relay-bin.info
    relay-log-index = /var/lib/mysql/relay-bin.index
    binlog_do_db = 24avia
    expire_logs_days = 10
    max_binlog_size = 100M
    read_buffer_size = 4024288
    innodb_buffer_pool_size = 5000M
    innodb_flush_log_at_trx_commit = 2
    innodb_thread_concurrency = 8
    table_definition_cache = 2000
    group_concat_max_len = 16M
    #binlog_do_db = gate
    #binlog_ignore_db = include_database_name
    #
    # * BerkeleyDB
    #
    # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
    #skip-bdb
    #
    # * InnoDB
    #
    # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
    # Read the manual for more InnoDB related options. There are many!
    # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
    #skip-innodb
    #
    # * Security Features
    #
    # Read the manual, too, if you want chroot!
    # chroot = /var/lib/mysql/
    #
    # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
    #
    # ssl-ca=/etc/mysql/cacert.pem
    # ssl-cert=/etc/mysql/server-cert.pem
    # ssl-key=/etc/mysql/server-key.pem
    [mysqldump]
    quick
    quote-names
    max_allowed_packet = 500M
    [mysql]
    #no-auto-rehash # faster start of mysql but no tab completition
    [isamchk]
    key_buffer = 32M
    key_buffer_size = 512M
    #
    # * NDB Cluster
    #
    # See /usr/share/doc/mysql-server-*/README.Debian for more information.
    #
    # The following configuration is read by the NDB Data Nodes (ndbd processes)
    # not from the NDB Management Nodes (ndb_mgmd processes).
    #
    # [MYSQL_CLUSTER]
    # ndb-connectstring=127.0.0.1
    #
    # * IMPORTANT: Additional settings that can override those from this file!
    # The files must end with '.cnf', otherwise they'll be ignored.
    #
    !includedir /etc/mysql/conf.d/
    Please, help me make it stable. Memory usage (/etc/mysql # free):
                 total       used       free     shared    buffers     cached
    Mem:      32930800   32766424     164376          0     139208   23829196
    -/+ buffers/cache:    8798020   24132780
    Swap:     33553328      44660   33508668
    Maybe my problem is not memory, but MySQL still stops every day. As you can see, about 24 GB of that is page cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe it's the HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB?
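
    As a back-of-envelope check (my own arithmetic, not from the question, using the usual global-plus-per-connection estimate rather than anything MySQL reports), the configured limits allow far more than 24 GB under load:

        global buffers   ~ 5000M (InnoDB pool) + 2024M (key buffer) + 210M (query cache) ~ 7.1 GB
        per connection   ~ 64M (sort) + 5M (join) + ~4M (read) + 8M (thread stack)       ~ 80 MB
        200 connections  ~ 200 x 80 MB                                                   ~ 16 GB worst case
        plus each in-memory temp table may grow to tmp_table_size = max_heap_table_size  = 3000M

    A handful of concurrent GROUP BY queries spilling into 3 GB heap tables on top of the global buffers would be enough to exhaust the 24 GB budget, which would explain MySQL dying daily even though the global buffers look sane.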

    Read the article

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be: Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up — some of our apps are 500MB or more.) Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team. When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight. I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions: I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"? How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project… Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
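
    For what it's worth, one Jenkins job per build type is the common pattern, and the nightly/client jobs usually boil down to an "Execute shell" build step along these lines (a hedged sketch only: the scheme, signing identity and profile path are placeholders, and the TestFlight upload fields are from memory, so check them against the TestFlight API docs):

        # Build the app (Xcode 4-era tooling, matching the question's timeframe)
        xcodebuild -scheme "AppX" -configuration Release \
            CONFIGURATION_BUILD_DIR="$WORKSPACE/build" clean build

        # Wrap the .app into a signed IPA with the chosen provisioning profile
        xcrun -sdk iphoneos PackageApplication -v "$WORKSPACE/build/AppX.app" \
            -o "$WORKSPACE/build/AppX-$BUILD_NUMBER.ipa" \
            --sign "iPhone Distribution: Example Agency Ltd" \
            --embed "$HOME/profiles/AppX_InternalAdHoc.mobileprovision"

        # Push to TestFlight (endpoint and field names from memory -- verify before use)
        curl https://testflightapp.com/api/builds.json \
            -F file=@"$WORKSPACE/build/AppX-$BUILD_NUMBER.ipa" \
            -F api_token="$TESTFLIGHT_API_TOKEN" \
            -F team_token="$TESTFLIGHT_TEAM_TOKEN" \
            -F notes="Automated build $BUILD_NUMBER" \
            -F notify=True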

    Read the article

  • What is the definition of "Big Data"?

    - by Ben
    Is there one? All the definitions I can find describe the size, complexity/variety or velocity of the data. Wikipedia's definition is the only one I've found with an actual number: "Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set." However, this seemingly contradicts the MIKE2.0 definition, referenced in the next paragraph, which indicates that "big" data can be small and that 100,000 sensors on an aircraft creating only 3GB of data could be considered big. IBM, despite saying that "Big data is more than simply a matter of size", have emphasised size in their definition. O'Reilly has stressed "volume, velocity and variety" as well. Though explained well, and in more depth, that definition seems to be a re-hash of the others - or vice versa, of course. I think that a Computer Weekly article title sums up a number of articles fairly well: "What is big data and how can it be used to gain competitive advantage". But ZDNet wins with the following from 2012: “Big Data” is a catch phrase that has been bubbling up from the high performance computing niche of the IT market... If one sits through the presentations from ten suppliers of technology, fifteen or so different definitions are likely to come forward. Each definition, of course, tends to support the need for that supplier’s products and services. Imagine that. Basically "big data" is "big" in some way, shape or form. What is "big"? Is it quantifiable at the current time? If "big" is unquantifiable, is there a definition that does not rely solely on generalities?

    Read the article

  • SNMP based network discovery (switches), device (ports on switches) power management

    - by SaM
    In an enterprise network, what would be the right way to generate a list of switches (SNMP managed)? Is it reasonable to ask the organization to supply a list such as this:
    Switch name
    IP address of switch
    Location
    SNMP community strings
    Or are there standard ways to run discovery scans - UDP broadcasts? After having generated a repository such as the above: given a single switch, how do I query it for the list of all devices attached to it? Finally, how do I selectively power down/power up ports (remotely, using SNMP)? The platform is going to be .NET based (C#) and the library being used is SharpSNMP.
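
    As an illustration of the last point, administratively disabling a port over SNMP is normally an SNMP SET on that port's ifAdminStatus (OID 1.3.6.1.2.1.2.2.1.7.<ifIndex>, 1 = up, 2 = down). A hedged sketch using #SNMP (SharpSnmpLib); the Messenger call shape follows the library's samples, so verify it against the version you ship with, and the IP, community string and ifIndex are placeholders:

        using System;
        using System.Collections.Generic;
        using System.Net;
        using Lextm.SharpSnmpLib;
        using Lextm.SharpSnmpLib.Messaging;

        class PortControl
        {
            static void Main()
            {
                var agent = new IPEndPoint(IPAddress.Parse("10.0.0.10"), 161); // switch IP (placeholder)
                var community = new OctetString("private");                    // write community (placeholder)
                int ifIndex = 12;                                              // port to shut down (placeholder)

                // ifAdminStatus.<ifIndex>: 1 = up, 2 = down
                var vars = new List<Variable>
                {
                    new Variable(new ObjectIdentifier("1.3.6.1.2.1.2.2.1.7." + ifIndex),
                                 new Integer32(2))
                };

                Messenger.Set(VersionCode.V2, agent, community, vars, 5000);
                Console.WriteLine("Port {0} set administratively down", ifIndex);
            }
        }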

    Read the article

  • ERROR 2003 (HY000): Can't connect to MySQL server on (111)

    - by JohnMerlino
    I am unable to connect to on my ubuntu installation a remote tcp/ip which contains a mysql installation: viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306 Enter password: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111) I commented out the line below using vim in /etc/mysql/my.cnf: # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 Then I restarted the server: sudo service mysql restart But still I get the same error. This is the content of my.cnf: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf. # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ (Note that I can log into my local mysql install just fine by running mysql (and it will log me in as root) and also note that I can get into mysql in the remote server by logging into via ssh and then invoking mysql), but I am unable to connect to the remote server via my terminal using the host, and I need to do it that way so that I can then use mysql workbench.
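
    For reference (a hedged checklist, not from the question): error 2003 with errno 111 means the TCP connection was refused outright, so after commenting out bind-address it is worth confirming that mysqld is actually listening on all interfaces and that nothing in between is blocking port 3306 (the host address below is a placeholder):

        # on the server
        sudo netstat -tlnp | grep 3306     # want 0.0.0.0:3306, not 127.0.0.1:3306
        sudo service mysql restart         # the bind-address change only applies after a restart
        sudo iptables -L -n | grep 3306    # look for a firewall rule (or ufw/hosting firewall) dropping the port

        # from the client
        telnet xxx.xxx.xxx.xxx 3306        # separates network/firewall problems from MySQL problems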

    Read the article

  • How do you persuade users to abandon their personal folders?

    - by thing2k
    Towards the end of last year we started using Mimecast services, in particular their cloud-based e-mail archiving. Since then we’ve been rolling out the Mimecast Services for Outlook (MSO) Add-in. We’ve informed the users that we will be giving them training in the next few months and that we do not require them to use it, but my boss stated that we are getting rid of Personal Folders (pst files) by putting them into Mimecast. Unsurprisingly this did cause something of a backlash. Though really, who likes change? I know the IT reasons for getting rid of Personal Folders (inefficient, unreliable, single access, etc.), but from an average user’s perspective, unless they have had one fail on them, they see them as the simple and only way to archive e-mail when their 200MB mailbox is full. So what can I say to the users to get them to understand why Personal Folders are not the best solution?

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1& 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: The dataflow containing just one Conditional Split (i.e. #1) will be quicker There is no significant difference between any of them One of the two dataflows containing multiple transformation components will be quicker Regardless of which of those outcomes come to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is setup. Observe that there is a case for each combination of Month Of Date and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence if we assume that Month of Date and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5 i.e. 1 (the minimum) + 24 (the maximum) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. 
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for thus the expected number of expression evaluations is 6.5 There are two of them so total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though so why is the third dataflow so much quicker? Simple, the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow however only have one predicate thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1 In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower, moreover you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done and that doesn’t necessarily mean use less components, indeed sometimes you may be able to reduce workload in ways that aren’t immediately obvious as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups Destination Adapter Comparison Don’t turn the dataflow into a cursor SSIS Dataflow – Designing for performance (webinar) Any comments? Let me know! @Jamiet
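
    To summarise the arithmetic above in one place (same numbers as in the post, per input row):

        One Conditional Split:        (1 + 24) / 2       = 12.5 expected evaluations, each expression has 2 predicates
        Derived Column + Split:       1 + 12.5           = 13.5 expected evaluations, split cases still use 2 predicates
        Three Conditional Splits:     1.5 + 6.5 + 6.5    = 14.5 expected evaluations, each expression has 1 predicate

    So although the third dataflow evaluates slightly more expressions, each expression tests a single predicate, which is where its performance advantage comes from.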

    Read the article

  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read & write to a single master database which replicates to a slave. Ideally, I'd like each of the webservers to point to their own master DB and have the data between the 2 synchronised but from what I've read, using any kind of master-master or ring-replication is discouraged. I'm looking for a general "what do other people do" kind of answer - database vendor isn't a concern at the moment but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew

    Read the article

  • 12.04 ext4 - cannot create regular file/No space left - with a lot of space and inodes

    - by user1434058
    This seems similar: EXT4 "No space left on device (28)" incorrect - but there is no explanation there. I created an ext4 filesystem on a RAID 1 array with:
    mke2fs -t ext4 -T small /dev/md0
    Copying a single directory with many tiny files I get:
    cp: cannot create regular file `/mnt/raid1_new/pics/pic3412.jpg': No space left on device
    Space used: 5%. Inodes used: 1%. I manually tried:
    cp /source/test1.jpg /mnt/raid1_new/pics/test1.jpg --- error
    cp /source/test1.jpg /mnt/raid1_new/pics/test2.jpg --- ERROR
    cp /source/test1.jpg /mnt/raid1_new/pics/test3.jpg --- no error
    Notes: the RAID 1 disks are error free. I tried mv instead of cp and got the same thing. I tried omitting -T small with no effect. Can somebody help me understand this magic?
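
    A hedged diagnostic sketch (my suggestion, not from the question): when ext4 reports "No space left" with blocks and inodes free, a frequent culprit for directories holding huge numbers of files is the hashed directory index filling up, and the kernel normally says so in dmesg:

        df -h /mnt/raid1_new && df -i /mnt/raid1_new    # confirm blocks and inodes really are free
        dmesg | tail -n 20                              # look for "ext4_dx_add_entry: Directory index full!"
        tune2fs -l /dev/md0 | grep -i features          # shows whether dir_index is enabled on the filesystem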

    Read the article

  • preparing to migrate MOSS 2007 SP1 content into new server with MOSS 2007 SP2 with the same server n

    - by Albert Widjaja
    To all the experts here: I'd like to perform a MOSS server content migration from the existing single-instance server (SQL Server 2008, Windows Server 2008, SharePoint 2007 SP1, all in one box) while keeping only the server name. Here's what I have in my migration plan document outline. I have created the new Windows Server 2008 R2 + SQL Server 2008 SP1 + MOSS 2007 SP2 server, and after that:
    1. Back up MOSS_ContentDB from SQL Server 2008 SSMS (is this enough to cover all top-level sites and their content in the libraries and document repositories?)
    2. Back up all top-level sites from the CA site using the Central Administration backup and restore tools.
    3. Turn off the old MOSSDEV01 (the current server).
    4. Rename the new server (TempServ01) to MOSSDEV01 (so that user bookmarks and other links inside the MOSS sites keep working).
    5. Restore the database.
    6. Restore the site collections and their content from the backup.
    Is that the correct way to do it? I haven't run the server configuration wizard yet to create the same port numbers for the SSP, CA and the SQL Server DB. Any ideas and suggestions would be greatly appreciated. Thanks.

    Read the article

  • How to queue up Windows 8 file copying to only have one copy running at a time

    - by Valamas
    The new Windows 8 File Explorer copying is great. I can set up multiple copy tasks; they appear in a single window and I am able to pause them. Is there a way to have only one copy running at a time and, when it completes, to start the next one automatically? Currently I have to set up the file copies, pause the subsequent ones, then unpause the next one when I notice the current one has finished. I am only asking about a way to queue File Explorer copying, not about using alternative tools like robocopy.

    Read the article

  • Conditionally create symbolic link by filesize using find exec ubuntu 10.04

    - by jmlw
    I have an interesting problem. I'm trying to create symbolic links in a single folder, for all files in a directory tree which are larger than a specified size. For clarification, here is an example:
    /Files
      /Large_Files
        /LargeFile1_symlink
        /LargeFile2_symlink
      /Folder1
        /file_a
        /file_b
      /Folder2
        /LargeFile1
      /Folder3
        /LargeFile2
        /file_c
    What I have so far to try to accomplish this is:
    find -size +102400 -exec ln -s $PWD/{} Large_Files/ \;
    However, this find produces ./LargeFile1, so my symlink command produces:
    ln -s /Files/Folder2/./LargeFile1 Large_Files/
    My question is: would it be possible to use the basename command to separate out only the filename so this command will work? Or does anybody have a suggestion on how to do this without writing a script, or an example of how to write one? I've never done scripting before. I do know Java, but don't want to take the time to do all this in Java. Thank you for any help!
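
    A hedged sketch of the basename idea the question asks about (my own suggestion, not the asker's code; note that a bare -size +102400 counts 512-byte blocks, i.e. files over roughly 50 MB, so add a k or M suffix if you meant a byte-based threshold):

        # run from /Files; creates one symlink per large file, named after the file itself
        find . -type f -size +102400 -exec sh -c \
            'ln -s "$PWD/${1#./}" "Large_Files/$(basename "$1")"' _ {} \;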

    Read the article

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on StackOverflow regarding MVC best practices, but most of those seem to revolve around things like using Dependency Injection, or creating helper functions, or do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, however very little is said about HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model-related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) all seem to use a single-tier, 2-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC? This question is one part standard Q&A, but another part best practices, so it could go either here or on programmers.se; I am marking it CW. If you feel it would be better suited to programmers.se then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.
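
    As a concrete illustration of the decoupling being asked about (a minimal sketch, not an authoritative layout; the Dinner entity and all names are illustrative, and in practice the interface would live in a separate Domain/Core project with the EF or L2S implementation in a Data project):

        using System.Linq;
        using System.Web.Mvc;

        public class Dinner                       // illustrative entity
        {
            public int DinnerID { get; set; }
            public string Title { get; set; }
        }

        public interface IDinnerRepository        // lives in the Domain/Core layer
        {
            IQueryable<Dinner> FindUpcoming();
            void Add(Dinner dinner);
            void Save();
        }

        public class DinnersController : Controller
        {
            private readonly IDinnerRepository _repository;

            // The concrete repository (EF, L2S, or a fake for tests) is supplied by the DI container.
            public DinnersController(IDinnerRepository repository)
            {
                _repository = repository;
            }

            public ActionResult Index()
            {
                return View(_repository.FindUpcoming().ToList());
            }
        }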

    Read the article

  • "Has Oracle written the script for CRM success?" - Anthony Lye on Customer Experience at BAFTA

    - by Richard Lefebvre
    Anthony Lye showcased Oracle Fusion CRM at a BAFTA gathering, and MyCustomer.com covered the story under the title of "Has Oracle written the script for CRM success?' According to MyCustomer.com, "Oracle's SVP of CRM Anthony Lye set the scene for the event, suggesting products are becoming commoditized, so that the only way to differentiate is through the relationship with the customer. But he warned that "customers are more and more in control of that relationship, so you have to provide great experiences for them." "The quickest win within your organization to create a single view is to connect your marketing organization with your selling organization, align goals, processes, people and technology," Anthony explained.   "And this is a transition that is already happening - "VPs of marketing have started turning up in the same meetings as VPs of sales, we have started to see that they want to work together" - but this convergence needs nurturing." "In Fusion there are capabilities to align the organisation - we enable marketing on the same platform to build campaigns connected to sales stages. It can affect leads and opportunities at the top end of the funnel. And the selling organisation can take advantage of marketing content - the materials that are exclusively within marketing can now be used by sales. Your sales teams have been campaigning forever, but it's usually by email, it isn't aligned with the corporate message and it's being sent to people it shouldn't. By aligning them we can increase output and the quality of that output." Anthony concluded: "Operating in a disconnected fashion having two distinct systems will cost you time and money. So we feel there's a material advantage in a solution like this." Enjoy the full story at http://www.mycustomer.com/topic/marketing/has-oracle-written-script-crm-success/139958

    Read the article

  • CodePlex Daily Summary for Friday, April 23, 2010

    CodePlex Daily Summary for Friday, April 23, 2010New Projects3D TagCloud for SharePoint 2010: 3D Flash TagCloud WebPart for SharePoint 2010AnyCAD.Net: AnyCAD.NetCassandraemon: Cassandraemon is LINQ Provider for Apache Cassandra.CCLI SongSelect Importer for PowerPoint: CCLI SongSelect Importer for PowerPoint ® is an Add-in for Microsoft ® PowerPoint ® that allows CCLI SongSelect (USR) files to be turned into slide...Compactar Arquivo Txt, Flat File, em Pipeline Decoder Customizado: Objetivo do projeto: Desenvolver um componente do tipo Pipeline Receiver Decoder, onde compacta o conteúdo, cria uma mensagem em XML e transforma ...Console Calculator: Console calculator is a simple, yet useful, mathematical expression calculator, supporting functions and variables. It was created to demonstrate ...CRM Dynamics Excel Importer: CRM Dynamics Excel Importercubace: The standard audio composer software with just single difference: this is CLR compilation.deneb: deneb projectDrive Backup: Drive Backup is an easy to use, automatic backup program. Simply insert a USB drive, and the program will backup either files on the drive to your ...eWebMVCCMS: this is the start of eWeb MVC CMS.Fix.ly: Small app that allows for URL rewriting before passing to the browser. Accepts MEF plugins that make themselves available by informing the applicat...GArphics: GArphics uses a genetic algorithm to produce graphics and animation with the assitance of the user.JDS Toolkit: An experimental toolkit geared to make richer applications with less effort. It will include controls such as the cubeoid and the serializedmenu. ...KrashSRC - MapleStory v.75 Emulator: KrashSRC - MapleStory v.75 EmulatorLast.fm Api: Last.fm api writen in Visaul Basic 2010.MIX 10 DVR and Downloader: A Silverlight application that will manage downloading the sessions and slide decks from the MIX '10 Conference utilizing the MIX OData feed for in...NSIS Autorun: This is a graphical CD/DVD/USB autorun engine that launches installers made with NSIS. Non-rectangular windows and animation are supported. Can be ...Pillbox: Windows Phone 7 sample application for tracking medications.PowerSharp: Very simple application that executes a snippet of PowerShell against C#. This will eventually be used with Live@EDU.Project Halosis: mmorpgProyecto Cero: Proyecto CeroSharePoint XSL Templates: This project is a place to share useful XSL templates that can be reused in SharePoint CQWPs and DVWPs.Silverlight 4.0 Popup Menu: Silverlight 4.0 Popup Menu spsearch: This project provides useful enhancements to Search using the SharePoint platform.StereoVision: StereVision es un proyecto que estudia un algoritmo de visión estereocopicaThe Stoffenmanager: The Stoffenmanager is a tool for prioritizing worker health risks to dangerous substances. But also a quantitative inhalation exposure tool and a ...Transcriber: Transcribe text from one character set to another. Extensible, plug-in based architecture. Default plug-in uses XML rules files with regular expres...Wavelets experiments: эксперименты с вейвлетамиWindows Phone 7 World of Warcraft Armory Browser: A test project to learn a little about Windows Phone development and do a decent armory browserXAML Based Chat: Xaml based chat. 
A simple chat systemNew Releases#Nose: SharpNose v1.1: Configuration is now done by updating SharpNose.exe.config MEF support added - you can also add your favorite test framework discovery Two tes...Baml Localizer: Version 0.1 (alpha): This is the first release which should show the capabilities of Baml Localizer. The code might still change a lot, but the file formats should be q...BibWord : Microsoft Word Citation and Bibliography styles: APA with DOI - Proof of Concept: IntroductionThis release is a proof of concept (POC) demonstrating a possible way of adding a digital object identifier (DOI) field to the APA styl...Chargify.NET: Chargify.NET v0.685: Releasing Version 0.685 - Changed customer reference ID from Guid to String for systems that don't use Guid as the unique key. - Added method for g...Compactar Arquivo Txt, Flat File, em Pipeline Decoder Customizado: SampleZipDecodePipeline: Solution contem Projeto com o Decoder Pipeline. Projeto para usar o Componente. Classes SharpZipLib para compactar e descompactar arquivosConsole Calculator: Console Calculator: Initial source code release.CSharp Intellisense: V1.6: UPDATE: 2010/04/05: description was added 2010/04/07: single selection + reset filter 20010/04/15: source code available at http://csharpintellis...Drive Backup: Drive Backup: Drive Backup allows you to automatically backup a USB device to your computer, or backup files/directories on your computer to a USB. Once you have...Event Scavenger: Thread recycling changes - Version 3.1: Change the location of where the settings for thread recycling is stored - Moved from config file to database for easier management. Version of dat...Extend SmallBasic: Teaching Extensions v.013: Added Houses QuizExtend SmallBasic: Teaching Extensions v.014: fixed a bug in Tortoise.approve rearranged the Houses Quiz to be more funFix.ly: Fix.ly 0.1: Initial test releaseFix.ly: Fix.ly 0.11: Fixed a couple bugs, including missing files in the previous releaseGArphics: Beta: This is the beta-version of the program. Version 1.0 shall be relased soon and will include a lot of improvements.HouseFly: HouseFly alpha 0.2.0.5: HouseFly alpha release 0.2.0.5HouseFly controls: HouseFly controls alpha 0.9.4: Version 0.9.4 alpha release of HouseFly controlsHTML Ruby: 6.21.8: Change Math.floor to round for text spacingHTML Shot: 0.1: Solved problems with some URLsJDS Toolkit: JDS Toolkit 0.1: Beta 0.1 version. Almost nothing in these librariesManaged Extensibility Framework: WebForms and MEF Sample: This sample demonstrates the use of these two technologies together in a non-invasive way. Information on how to use it on your own projects is inc...Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V0.7 - N-Layer DDD Sample App (VS.2010 RTM compat): Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Unity Applic...MvcContrib Portable Areas: Portable Areas: First Release of some portable areasNSIS Autorun: NSIS Autorun: Initial release.OgmoXNA: OgmoXNA Alpha Source Tree: Zipped version of the source tree in case you don't want to go through the SVN!Particle Plot Pivot: Particle Plot Pivot v1.0.0: Generates a Pivot collection of unpublished plots from the particle physics exeriments DZERO, CDF, ATLAS, and CMS. 
It can be found at http://deepta...patterns & practices SharePoint Guidance: SPG2010 Drop9: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...Rich Ajax empowered Web/Cloud Applications: 6.3.15: New Visual WebGui rich applications platform versionSilverlight 4.0 Popup Menu: PopupMenu for Silverlight 4: This is the first release of the popup menu class for Silverlight 4.0Silverlight Flow Layouts library: SL and WPF Flow Layouts library April 2010: This release introduces WPF 4.0 RTM and Silverlight 4 RTM support, as well as an additional layout algorithm and some minor bug fixes. Some changes...Spackle.NET: 3.0.0.0 Release: In this release: Spackle.dll now targets the 4.0 version of the .NET Framework SecureRandom implements IDisposable ActionExtensions have been ...Splinger FrameXi: Splinger 1.1: Welcome to a whole new way of learning! Go to release 1.0 for the non .zip packaged files.SQL Server Metadata Toolkit 2008: SQL Server Metadata Toolkit Alpha 6: This release addresses issues 10665, 10678 and 10679. The SQL Parser now understands CAST functions (the AS was causing issues), and is installed ...Star Trooper for XNA 2D Tutorial: Lesson four content: Here is Lesson four original content for the StarTrooper 2D XNA tutorial. It also includes the XNA version of Lesson four source. The blog tutori...Thales Simulator Library: Version 0.8.6: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...Transcriber: Transcriber v0.1: Initial alpha release. Very nearly useful. :-) This version includes rules files for Mode of Beleriand, Sindarin Tehtar, Quenya, and Black Speech. ...Visual Studio DSite: Picture Box Viewer (Visual F sharp 2008): A simple picturebox viewer made in visual f sharp 2008.Web/Cloud Applications Development Framework | Visual WebGui: 6.4 Beta 2d: Further stabilization of the cutting-edge web applications frameworkWebAssert: WebAssert 0.1: Initial release. Supports HTML & CSS validation using MSTest/Visual Studio testing.XAML Based Chat: Test release: A test releaseすとれおじさん(仮): すとれおじさん β 0.02: ・デザインを大幅に変更 ・まだかなり動作が重いです ・機能も少ないですMost Popular ProjectsRawrWBFS ManagerSilverlight ToolkitAJAX Control ToolkitMicrosoft SQL Server Product Samples: Databasepatterns & practices – Enterprise LibraryWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrParticle Plot PivotBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleGMap.NET - Great Maps for Windows Forms & PresentationFarseer Physics EngineDotNetZip LibraryFluent Ribbon Control SuiteN2 CMS

    Read the article

  • How to Configure OpenLDAP on Ubuntu 10.04 Server

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but can not get it to work. When I try to use sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif I'm getting the following error: Enter LDAP Password: <entered 'secret' as password> adding new entry "dc=don,dc=com" ldap_add: Naming violation (64) additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry Again when I try to do the same, I'm getting the following error: root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif Enter LDAP Password: ldap_bind: Invalid credentials (49) Here is the backend.ldif file: # Load dynamic backend modules dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/lib/ldap olcModuleload: back_hdb # Database settings dn: olcDatabase=hdb,cn=config objectClass: olcDatabaseConfig objectClass: olcHdbConfig olcDatabase: {1}hdb olcSuffix: dc=don,dc=com olcDbDirectory: /var/lib/ldap olcRootDN: cn=admin,dc=don,dc=com olcRootPW: secret olcDbConfig: set_cachesize 0 2097152 0 olcDbConfig: set_lk_max_objects 1500 olcDbConfig: set_lk_max_locks 1500 olcDbConfig: set_lk_max_lockers 1500 olcDbIndex: objectClass eq olcLastMod: TRUE olcDbCheckpoint: 512 30 olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none olcAccess: to attrs=shadowLastChange by self write by * read olcAccess: to dn.base="" by * read olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read frontend.ldif file: # Create top-level object in domain dn: dc=don,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Example Organization dc: Example description: LDAP Example # Admin user. dn: cn=admin,dc=don,dc=com objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator userPassword: secret dn: ou=people,dc=don,dc=com objectClass: organizationalUnit ou: people dn: ou=groups,dc=don,dc=com objectClass: organizationalUnit ou: groups dn: uid=john,ou=people,dc=don,dc=com objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uid: john sn: Doe givenName: John cn: John Doe displayName: John Doe uidNumber: 1000 gidNumber: 10000 userPassword: password gecos: John Doe loginShell: /bin/bash homeDirectory: /home/john shadowExpire: -1 shadowFlag: 0 shadowWarning: 7 shadowMin: 8 shadowMax: 999999 shadowLastChange: 10877 mail: [email protected] postalCode: 31000 l: Toulouse o: Example mobile: +33 (0)6 xx xx xx xx homePhone: +33 (0)5 xx xx xx xx title: System Administrator postalAddress: initials: JD dn: cn=example,ou=groups,dc=don,dc=com objectClass: posixGroup cn: example gidNumber: 10000 Can anyone help me?
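
    A hedged observation (mine, not from the question): slapd raises exactly this naming violation when the dc attribute inside an entry does not match the dc value in the entry's DN; the frontend.ldif above defines dn: dc=don,dc=com but then sets dc: Example. A corrected top-level entry would look roughly like this (the o and description values are illustrative):

        # Create top-level object in domain
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Don Organization
        dc: don
        description: LDAP Example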

    Read the article
