Search Results

Search found 337 results on 14 pages for 'hashing'.

Page 7/14

  • Can per-user randomized salts be replaced with iterative hashing?

    - by Chas Emerick
    In the process of building what I'd like to hope is a properly-architected authentication mechanism, I've come across a lot of materials that specify that: user passwords must be salted; the salt used should be sufficiently random and generated per-user; and therefore, the salt must be stored with the user record in order to support verification of the user password. I wholeheartedly agree with the first and second points, but it seems like there's an easy workaround for the latter. Instead of doing the equivalent of (pseudocode here): salt = random(); hashedPassword = hash(salt . password); storeUserRecord(username, hashedPassword, salt); Why not use the hash of the username as the salt? This yields a domain of salts that is well-distributed, (roughly) random, and each individual salt is as complex as your salt function provides for. Even better, you don't have to store the salt in the database -- just regenerate it at authentication-time. More pseudocode: salt = hash(username); hashedPassword = hash(salt . password); storeUserRecord(username, hashedPassword); (Of course, hash in the examples above should be something reasonable, like SHA-512, or some other strong hash.) This seems reasonable to me given what (little) I know of crypto, but the fact that it's a simplification over widely-recommended practice makes me wonder whether there's some obvious reason I've gone astray that I'm not aware of.
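
    A minimal Python sketch of the two schemes being compared (illustrative names; SHA-256 stands in for whatever strong hash is chosen). One commonly raised objection to the second scheme is that a salt derived only from the username is predictable and repeats across systems, so tables can be precomputed for common usernames:

        import hashlib
        import os

        def store_with_random_salt(username, password):
            # salt = random(); hashedPassword = hash(salt . password); store all three
            salt = os.urandom(16).hex()
            hashed = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
            return (username, hashed, salt)

        def store_with_username_salt(username, password):
            # salt = hash(username); hashedPassword = hash(salt . password); salt not stored
            salt = hashlib.sha256(username.encode("utf-8")).hexdigest()
            hashed = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
            return (username, hashed)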

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep and the folders never get too full - too many files (or folders) in a folder makes for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: thanks Mat for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
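
    A short illustrative sketch of the technique (hypothetical paths; git's object store and many caches fan files out in a similar way, using the leading characters of a hash as directory names):

        import hashlib
        import os

        def blob_path(root, blob_name):
            # Fan out into 256 x 256 buckets using the first two bytes of the hash,
            # so no single directory becomes too deep or too full.
            digest = hashlib.sha1(blob_name.encode("utf-8")).hexdigest()
            return os.path.join(root, digest[:2], digest[2:4], digest)

        print(blob_path("/var/data/blobs", "some-document.pdf"))
        # e.g. /var/data/blobs/3f/9a/3f9a... -- two shallow, evenly filled levels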

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put that in another class and expose it." While sometimes that's the case and we have a hiding concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument of a method in the other class) and that expose functionality which is, in fact, an implementation detail. Especially with TDD, when you refactor a class with public methods out of a previously tested class, that class is now part of your interface, but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may not be finding an obviously better answer, but if my answer is the "correct" one, that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to the specific cases I may have written. Rather, try to explain more of the generic case and theoretical approach. And this is not language specific. Thanks in advance. EDIT: The answer given by Matthew Flynn was really insightful, but didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are a separate concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) that it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
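
    On the hashing-function example specifically, one common workaround (not from the post) is to test against published known-answer vectors instead of calculating anything by hand; a minimal Python sketch, with my_sha256_hex standing in for the implementation under test:

        import hashlib
        import unittest

        def my_sha256_hex(data):
            # Stand-in for the hashing implementation under test.
            return hashlib.sha256(data).hexdigest()

        class KnownAnswerTests(unittest.TestCase):
            # Vectors published with the algorithm spec, not computed by hand.
            def test_empty_input(self):
                self.assertEqual(my_sha256_hex(b""),
                    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")

            def test_abc(self):
                self.assertEqual(my_sha256_hex(b"abc"),
                    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")

        if __name__ == "__main__":
            unittest.main()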

    Read the article

  • Makefile: Build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed of multiple files, each of which will be built into an .so dynamic library that the interpreter will load upon user request (with an import directive). Each source file is located in a subdirectory representing its "namespace". The build process has to create a "build" directory; then, when each file is compiled, it has to create the file's namespace directory inside the "build" one. For instance, when compiling std/io/network/tcp.cc it runs an mkdir command with mkdir -p build/std/io/network The Makefile snippet is: STDSRC=stdlib/std/hashing/md5.cc \ stdlib/std/hashing/crc32.cc \ stdlib/std/hashing/sha1.cc \ stdlib/std/hashing/sha2.cc \ stdlib/std/io/network/http.cc \ stdlib/std/io/network/tcp.cc \ stdlib/std/io/network/smtp.cc \ stdlib/std/io/file.cc \ stdlib/std/io/console.cc \ stdlib/std/io/xml.cc \ stdlib/std/type/reflection.cc \ stdlib/std/type/string.cc \ stdlib/std/type/matrix.cc \ stdlib/std/type/array.cc \ stdlib/std/type/map.cc \ stdlib/std/type/type.cc \ stdlib/std/type/binary.cc \ stdlib/std/encoding.cc \ stdlib/std/os/dll.cc \ stdlib/std/os/time.cc \ stdlib/std/os/threads.cc \ stdlib/std/os/process.cc \ stdlib/std/pcre.cc \ stdlib/std/math.cc STDOBJ=$(STDSRC:.cc=.so) all: stdlib stdlib: $(STDOBJ) .cc.so: mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'` $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS) I have two questions: 1 - The problem is that the make command, I really don't know why, doesn't check whether a file was modified and launches the build process on ALL the files no matter what, so if I need to build only one file, I have to build them all or use the command: make path/to/single/file.so Is there any way to solve this? 2 - Is there any way to do this in a "cleaner" way, without having to distribute all the build directories with the sources? Thanks

    Read the article

  • Solved: Chrome v18, self signed certs and “signed using a weak signature algorithm”

    - by David Christiansen
    So Chrome has just updated itself automatically and you are now running v18 – great. Or is it… If, like me, you are someone who is running sites using a self-signed SSL Certificate (i.e. when running a site on a developer machine) you may come across the following lovely message. Fear not, this is likely a result of following instructions you found on the Apache OpenSSL site, which result in a self-signed cert using the MD5 signature hashing algorithm. Using OpenSSL, the simple fix is to generate a new certificate specifying the SHA512 signature hashing algorithm, like so: openssl req -new -x509 -sha512 -nodes -out server.crt -keyout server.key Simples! Now, you should be able to confirm the signature algorithm used is sha512 by looking at the details tab of the certificate. Notes: If you change your certificate, be sure to reapply any private key permissions you require – such as allowing access to the application pool user.

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two. The simplest way I can think of is to have every activity have a role, and assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A way I recently designed was to have users who have roles, roles which have privileges, and activities that require some combination of privileges. For the trivial case, this is more complex, but I think it will scale better. But after I implemented it, I felt like it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform them, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would be to allow logical statements of role ownership to use an activity (e.g. must have A and (B exclusive or C) and must not have D). I could continue to list more, but I think this already gives a picture. And many of these have trade-offs. But in software design, there are oftentimes solutions that, while perhaps not perfect in every possible case, are clearly top of the pack to an extent where it isn't even considered opinion based (e.g. how to store passwords: plain text is worse, hashing is better, hashing with salt even better, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if it is subjective as to what the best design is). So, is there a best practice for authorization design that is not purely opinion based/subjective?
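
    A tiny sketch of the users -> roles -> privileges -> activities variant described above (hypothetical names; Python used purely for illustration):

        ROLE_PRIVILEGES = {
            "editor": {"read", "write"},
            "admin": {"read", "write", "delete"},
        }
        ACTIVITY_REQUIRES = {
            "publish_post": {"write"},
            "remove_post": {"write", "delete"},
        }

        def can_perform(user_roles, activity):
            # The user may perform the activity if their combined privileges
            # cover everything the activity requires.
            granted = set().union(*(ROLE_PRIVILEGES.get(r, set()) for r in user_roles))
            return ACTIVITY_REQUIRES[activity] <= granted

        print(can_perform({"editor"}, "publish_post"))  # True
        print(can_perform({"editor"}, "remove_post"))   # False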

    Read the article

  • PBKDF2-HMAC-SHA1

    - by Jason
    To generate a valid pairwise master key for a WPA2 network, a router uses the PBKDF2-HMAC-SHA1 algorithm. I understand that the SHA1 function is performed 4096 times to derive the PMK; however, I have two questions about the process. Excuse the pseudocode. 1) How is the input to the first instance of the SHA1 function formatted? SHA1("network_name"+"network_name_length"+"network_password") Is it formatted in that order, and is it the hex value of the network name, length and password, or straight ASCII? Then, from what I gather, the 160-bit digest received is fed straight into another round of hashing without any additional salting. Like this: SHA1("160bit digest from last round of hashing") Rinse and repeat. 2) Once this occurs 4096 times, 256 bits of the output are used as the pairwise master key. What I don't understand is that if SHA1 produces a 160-bit output, how does the algorithm arrive at the 256 bits required for a key? Thanks for the help.
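
    For reference, this exact derivation is available in Python's standard library; a minimal sketch of the WPA2 PMK computation (the SSID is the salt, 4096 iterations, 256-bit output -- PBKDF2 derives each 160-bit output block with its own chain of 4096 HMAC-SHA1 iterations and concatenates the blocks, two of them here, which is how it reaches 256 bits):

        import hashlib

        def wpa2_pmk(ssid, passphrase):
            # PMK = PBKDF2(HMAC-SHA1, passphrase, ssid, 4096 iterations, 32 bytes)
            return hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"),
                                       ssid.encode("utf-8"), 4096, dklen=32)

        print(wpa2_pmk("MyNetwork", "correct horse battery staple").hex())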

    Read the article

  • Good way to find duplicate files?

    - by OverTheRainbow
    Hello, I don't know enough about VB.Net (2008, Express Edition) yet, so I wanted to ask if there is a better way to find files with different names but the same contents, i.e. duplicates. In the following code, I use GetFiles() to retrieve all the files in a given directory and, for each file, use MD5 to hash its contents and check if this value already lives in a dictionary: if yes, it's a duplicate and I'll delete it; if not, I add this filename/hash value into the dictionary for later: 'Get all files from directory Dim currfile As String For Each currfile In Directory.GetFiles("C:\MyFiles\", "File.*") 'Check if hashing already found as value, ie. duplicate If StoreItem.ContainsValue(ReadFileMD5(currfile)) Then 'Delete duplicate 'This hashing not yet found in dictionary -> add it Else StoreItem.Add(currfile, ReadFileMD5(currfile)) End If Next Is this a good way to solve the issue of finding duplicates, or is there a better way I should know about? Thank you.
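
    The same idea in a minimal Python sketch (illustrative only; keying the dictionary by the hash rather than by the filename avoids a linear ContainsValue scan for every file):

        import hashlib
        import os

        def find_duplicates(directory):
            seen = {}        # content hash -> first path that had it
            duplicates = []  # paths whose contents were already seen
            for name in os.listdir(directory):
                path = os.path.join(directory, name)
                if not os.path.isfile(path):
                    continue
                with open(path, "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                if digest in seen:
                    duplicates.append(path)  # same contents as seen[digest]
                else:
                    seen[digest] = path
            return duplicates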

    Read the article

  • Hash Table: Should I increase the element count on collisions?

    - by Nazgulled
    Hi, Right now my hash tables count every element inserted into the hash table. I use this count, with the total hash table size, to calculate the load factor, and when it reaches something like 70%, I rehash. I was thinking that maybe I should only count the inserted elements which fill an empty slot instead of all of them, because the collision method I'm using is separate chaining. The load factor keeps increasing, but with a few collisions there can be lots of empty slots left on the hash table. You are probably thinking that if I have that many collisions, maybe I'm not using the best hashing method. But that's not the point: I'm using one of the known hashing algorithms out there; I tested 3 of them on my sample data and selected the one that produced the fewest collisions. My question still remains. Should I keep counting every element inserted, or just the ones that fill an empty slot in the hash table?
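
    A small sketch of the distinction being asked about (illustrative Python; tracking both counts side by side lets you compare the usual elements-per-bucket load factor with an occupied-buckets ratio under separate chaining):

        class ChainedHashTable:
            def __init__(self, size=16):
                self.buckets = [[] for _ in range(size)]
                self.total_elements = 0    # incremented on every insert
                self.occupied_buckets = 0  # incremented only when an empty slot is filled

            def insert(self, key, value):
                chain = self.buckets[hash(key) % len(self.buckets)]
                if not chain:
                    self.occupied_buckets += 1
                chain.append((key, value))
                self.total_elements += 1

            def load_factor(self):
                return self.total_elements / len(self.buckets)

            def occupancy(self):
                return self.occupied_buckets / len(self.buckets)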

    Read the article

  • Password reset by email without a database table

    - by jpatokal
    The normal flow for resetting a user's password by mail is this: generate a random string and store it in a database table; email the string to the user; the user clicks on a link containing the string; the string is validated against the database and, if it matches, the user's password is reset. However, maintaining a table and expiring old strings etc. seems like a bit of an unnecessary hassle. Are there any obvious flaws in this alternative approach? Generate an MD5 hash of the user's existing password; email the hash string to the user; the user clicks on a link containing the string; the string is validated by hashing the existing password again and, if it matches, the user's password is reset. Note that the user's password is already stored in a hashed and salted form, and I'm just hashing it once more to get a unique but repeatable string. And yes, there is one obvious "flaw": the reset link thus generated will not expire until the user changes their password (clicks the link). I don't really see why this would be a problem though -- if the mailbox is compromised, the user is screwed anyway.
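
    One commonly used stateless variant (not what the post proposes, but close in spirit) is to sign a token with a server-side secret over the user ID, the current password hash and a timestamp, so the link both expires and is invalidated when the password changes; a minimal sketch, with the secret and field layout being assumptions:

        import hashlib
        import hmac
        import time

        SECRET_KEY = b"server-side secret, not stored per user"  # assumption

        def make_reset_token(user_id, password_hash):
            ts = str(int(time.time()))
            msg = "{0}:{1}:{2}".format(user_id, password_hash, ts).encode("utf-8")
            sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
            return ts + ":" + sig

        def check_reset_token(token, user_id, password_hash, max_age=3600):
            try:
                ts, sig = token.split(":")
                age = time.time() - int(ts)
            except ValueError:
                return False
            if age > max_age:
                return False  # token expired
            msg = "{0}:{1}:{2}".format(user_id, password_hash, ts).encode("utf-8")
            expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected)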

    Read the article

  • Is it faster to loop through a Python set of numbers or a set of letters?

    - by Scott Bartell
    Is it faster to loop through a Python set of numbers or a Python set of letters, given that each set is the exact same length and each item within each set is the same length? Why? I would think that there would be a difference because letters have more possible characters [a-zA-Z] than numbers [0-9] and therefore would be more 'random' and likely affect the hashing to some extent. numbers = set([00000,00001,00002,00003,00004,00005, ... 99999]) letters = set(['aaaaa','aaaab','aaaac','aaaad', ... 'aaabZZ']) # this is just an example, it does not actually end here for item in numbers: do_something() for item in letters: do_something() where len(numbers) == len(letters) Update: I am interested in Python's specific hashing algorithm and what happens behind the scenes with this implementation.
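
    A quick way to check empirically (illustrative sketch; note that iterating a set just walks the already-built hash table, so the items are not re-hashed during the loop -- any difference would come from how the table was laid out at insert time):

        import timeit

        numbers = set(range(100000))
        letters = {"k%05d" % i for i in range(100000)}  # equal-length strings

        def loop(s):
            for _ in s:
                pass

        print("ints:   ", timeit.timeit(lambda: loop(numbers), number=200))
        print("strings:", timeit.timeit(lambda: loop(letters), number=200))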

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)

    Read the article

  • Memcache key generation strategy

    - by Maxim Veksler
    Given a function f1 which receives n String arguments, which would be considered the better key generation strategy for memcache in the scenario described below? Our Memcache client does internal md5sum hashing on the keys it gets: public class MemcacheClient { public Object get(String key) { String md5 = Md5sum.md5(key) // Talk to memcached to get the Serialization... return memcached(md5); } } First option public static String f1(String s1, String s2, String s3, String s4) { String key = s1 + s2 + s3 + s4; return get(key); } Second option /** * Calculate hash from Strings * * @param objects vararg list of String's * * @return calculated md5sum hash */ public static String stringHash(Object... strings) { if(strings == null) throw new NullPointerException("D'oh! Can't calculate hash for null"); MD5 md5sum = new MD5(); // if(prevHash != null) // md5sum.Update(prevHash); for(int i = 0; i < strings.length; i++) { if(strings[i] != null) { md5sum.Update("_" + strings[i] + "_"); // Convert to String... } else { // If object is null, allow minimum entropy by hashing it's position md5sum.Update("_" + i + "_"); } } return md5sum.asHex(); } public static String f1(String s1, String s2, String s3, String s4) { String key = stringHash(s1, s2, s3, s4); return get(key); } Note that the possible problem with the second option is that we are doing a second md5sum (in the memcache client) on an already md5sum'ed digest result. Thanks for reading, Maxim.
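
    The usual argument for the second option (or at least for a delimiter) is that plain concatenation loses the argument boundaries; a small illustrative Python sketch of the collision and of a hash that mirrors the null handling above:

        import hashlib

        def naive_key(*parts):
            # "ab" + "c" and "a" + "bc" both become "abc" -- different inputs, same key.
            return "".join(p for p in parts if p is not None)

        def hashed_key(*parts):
            md5 = hashlib.md5()
            for i, p in enumerate(parts):
                if p is not None:
                    md5.update(("_" + str(p) + "_").encode("utf-8"))
                else:
                    md5.update(("_" + str(i) + "_").encode("utf-8"))  # position for nulls
            # Note: a bare delimiter is only fully safe if it cannot occur inside the
            # arguments; length-prefixing each part is the general fix.
            return md5.hexdigest()

        assert naive_key("ab", "c") == naive_key("a", "bc")    # collision
        assert hashed_key("ab", "c") != hashed_key("a", "bc")  # boundaries preserved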

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database, PostgreSQL, MySQL, SQL Server or Oracle. Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple. MessageDigest in Java provides MD5 and SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 functions for inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password. Then re-hashing to match using the hash algorithm...? So... any takers on helping? :) Am I close?
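
    The piece that usually unlocks this: the salt is never recovered from the hash -- it is stored next to it (it can even live inside the same column), and verification simply repeats the same salt-plus-password hash. A minimal illustrative sketch in Python (a real system would use a slow KDF such as PBKDF2 or bcrypt rather than a single SHA-256):

        import hashlib
        import hmac
        import os

        def hash_password(password):
            salt = os.urandom(16)
            digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
            # One DB column: the salt travels with the hash; it is not secret.
            return salt.hex() + "$" + digest

        def verify_password(stored, candidate):
            salt_hex, digest = stored.split("$", 1)
            salt = bytes.fromhex(salt_hex)
            attempt = hashlib.sha256(salt + candidate.encode("utf-8")).hexdigest()
            return hmac.compare_digest(attempt, digest)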

    Read the article

  • Password security; Is this safe?

    - by Camran
    I asked a question yesterday about password safety... I am new at security... I am using a MySQL db, and need to store users' passwords there. I have been told in answers that hashing and THEN saving the HASHED value of the password is the correct way of doing this. So basically I want to verify with you guys that this is correct now. It is a classifieds website, and for each classified the user puts up, he has to enter a password so that he/she can remove the classified using that password later on (when the product is sold, for example). In a file called "put_ad.php" I use the $_POST method to fetch the pass from a form. Then I hash it and put it into a MySQL table. Then, whenever the user wants to delete the ad, I check the entered password by hashing it and comparing the hashed value of the entered password against the hashed value in the MySQL db, right? BUT, what if I as an admin want to delete a classified: is there a method to "unhash" the password easily? SHA1 is used currently, btw. Some code is very much appreciated. Thanks

    Read the article

  • How to map string keys to unique integer IDs?

    - by Marek
    I have some data that comes regularly as a dump from a data source with a string natural key that is long (up to 60 characters) and not relevant to the end user. I am using this key in a URL. This makes URLs too long and user unfriendly. I would like to transform the string keys into integers with the following requirements: The source dataset will change over time. The ID should be: a non-negative integer; unique and constant even if the set of input keys changes; preferably reversible back to the key (not a strong requirement). The database is rebuilt from scratch every time, so I cannot remember the already assigned IDs, match the new data set to existing IDs and generate sequential IDs for the added keys. There are currently around 30000 distinct keys and the set is constantly growing. How to implement a function that will map string keys to integer IDs? What I have thought about: 1. Built-in string.GetHashCode: ID(key) = Math.Abs(key.GetHashCode()) is not guaranteed to be unique (not reversible). 1.1 "Re-hashing" the built-in GetHashCode until a unique ID is generated to prevent collisions: existing IDs may change if something colliding is added to the beginning of the input data set. 2. A perfect hashing function: I am not sure if this can generate constant IDs if the set of inputs changes (not reversible). 3. Translate to base 36/64/??: does not shorten the long keys enough. What are the other options?
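
    One common approach (not reversible; shown as an illustrative Python sketch) is to take a fixed number of bits from a cryptographic hash of the key: unlike GetHashCode it is stable across runs and machines, independent of whatever else is in the data set, and with around 30000 keys and 63 bits the collision probability is negligible (roughly n^2 / 2^64):

        import hashlib

        def stable_id(key, bits=63):
            # Deterministic, non-negative, and unaffected by the rest of the key set.
            digest = hashlib.sha1(key.encode("utf-8")).digest()  # 160 bits
            return int.from_bytes(digest, "big") >> (160 - bits)

        print(stable_id("some-long-60-character-natural-key"))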

    Read the article

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (this is because it needs to perform some mathematical simulation on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite a few hashes and other additional data structures on the data. The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies in the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory, and uses the hashing and other data structures. This means I have to do the following: take the user's changes from the dialog; apply them to the big data set; perform the checks on the big data set; undo all the changes if the checks fail. I don't like this solution, since other threads are also continuously using the data set, and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is also not possible. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog), but this means that the checking code cannot use hashing and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice for checking a user's changes on complex data before applying them to the 'application's' data set?

    Read the article

  • How to run commands when mac is idle and when it resumes

    - by tig
    I want to run a script when my Mac is idle (for example after 5 minutes; screen saver start time is also OK) and when I resume it from the idle state. I know that I can write a daemon using NSDistributedNotificationCenter and com.apple.screenIsLocked and com.apple.screenIsUnlocked, but I hope that there is already a solution that doesn't require creating a new daemon. I need this to, for example, turn the speed limit on/off for Transmission (as it is sometimes hard to work when it is hashing/downloading at full speed).

    Read the article

  • Hashed pattern in bar graphs - MS Excel

    - by user1189851
    I am drawing graphs for a paper that supports only black and white graphs. I need to show more than 3 histograms and want to have different patterns on them, like hashed, dotted, double hashed, etc., instead of different colors in the legend. I am using MS Excel 2007. I tried but don't find a way, except for the options available in the Design tab that I find when I double-click on the chart area (these are shades of grey, and I want patterns like hashing, dots, etc.). Thanks in advance.

    Read the article

  • Salt and hash a password in .NET

    - by Jon Canning
    I endeavoured to follow the CrackStation rules: Salted Password Hashing - Doing it Right

        public class SaltedHash
        {
            public string Hash { get; private set; }
            public string Salt { get; private set; }

            public SaltedHash(string password)
            {
                var saltBytes = new byte[32];
                new RNGCryptoServiceProvider().GetNonZeroBytes(saltBytes);
                Salt = ConvertToBase64String(saltBytes);
                var passwordAndSaltBytes = Concat(password, saltBytes);
                Hash = ComputeHash(passwordAndSaltBytes);
            }

            static string ConvertToBase64String(byte[] bytes)
            {
                return Convert.ToBase64String(bytes);
            }

            static string ComputeHash(byte[] bytes)
            {
                return ConvertToBase64String(SHA256.Create().ComputeHash(bytes));
            }

            static byte[] Concat(string password, byte[] saltBytes)
            {
                var passwordBytes = Encoding.UTF8.GetBytes(password);
                return passwordBytes.Concat(saltBytes).ToArray();
            }

            public static bool Verify(string salt, string hash, string password)
            {
                var saltBytes = Convert.FromBase64String(salt);
                var passwordAndSaltBytes = Concat(password, saltBytes);
                var hashAttempt = ComputeHash(passwordAndSaltBytes);
                return hash == hashAttempt;
            }
        }

    Read the article

  • PHP security regarding login

    - by piers
    I have read a lot about PHP login security recently, but many questions on Stack Overflow regarding security are outdated. I understand bcrypt is one of the best ways of hashing passwords today. However, for my site, I believe sha512 will do very well, at least to begin with. (I mean bcrypt is for bigger sites, sites that require high security, right?) I'm also wondering about salting. Is it necessary for every password to have its own unique salt? Should I have one field for the salt and one for the password in my database table? What would be a decent salt today? Should I join the username together with the password and add a random word/letter/special character combination to it? Thanks for your help!

    Read the article

  • Question about the no-endorsement clause on the BSD license

    - by Earlz
    I'm developing a non-free library and I want to use Bcrypt.Net in it. The clause in question: Neither the name of BCrypt.Net nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. To what extent does this mean I can't use the name of Bcrypt.Net? For instance, could I say "the only ASP.Net authentication library capable of using Bcrypt" or can I even include "supports Bcrypt for password hashing" in promotional materials? Note: I do not actually modify any of Bcrypt.Net's code

    Read the article
