Search Results

Search found 23347 results on 934 pages for 'key storage'.


  • Learning AES: the KeyBytes

    - by Tom Brito
    I got the following example from here:

        import java.security.Security;

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;

        public class MainClass {
            public static void main(String[] args) throws Exception {
                Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
                byte[] input = "www.java2s.com".getBytes();
                byte[] keyBytes = new byte[] {
                    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
                    0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
                    0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17 };
                SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");
                Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC");
                System.out.println(new String(input));

                // encryption pass
                cipher.init(Cipher.ENCRYPT_MODE, key);
                byte[] cipherText = new byte[cipher.getOutputSize(input.length)];
                int ctLength = cipher.update(input, 0, input.length, cipherText, 0);
                ctLength += cipher.doFinal(cipherText, ctLength);
                System.out.println(new String(cipherText));
                System.out.println(ctLength);

                // decryption pass
                cipher.init(Cipher.DECRYPT_MODE, key);
                byte[] plainText = new byte[cipher.getOutputSize(ctLength)];
                int ptLength = cipher.update(cipherText, 0, ctLength, plainText, 0);
                ptLength += cipher.doFinal(plainText, ptLength);
                System.out.println(new String(plainText));
                System.out.println(ptLength);
            }
        }

    I imagine that the byte[] keyBytes should be randomly generated, so I went to test the maximum size before doing that. When I add one more byte, 0x18, to the array, this exception is raised: InvalidKeyException: Key length not 128/192/256 bits. But the original 24 bytes (0x00 through 0x17) don't match 128, 192 or 256 either. I would like to understand the math here. Can anyone explain it to me? Thanks!
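    An editorial aside on the math: the 128/192/256 in the exception are bits, while the array is measured in bytes, so the valid key lengths are exactly 16, 24 or 32 bytes. The 24-byte array above is therefore a legal 192-bit key, and adding a 25th byte gives 200 bits, which is not. A quick sanity check, sketched in Python (illustration only, not tied to the Java code above):

        import os

        # AES keys are sized in bits: 128, 192, or 256,
        # i.e. exactly 16, 24, or 32 bytes.
        VALID_AES_KEY_BYTES = {16, 24, 32}

        def is_valid_aes_key(key_bytes):
            return len(key_bytes) in VALID_AES_KEY_BYTES

        print(is_valid_aes_key(bytes(range(0x18))))  # 24 bytes -> 192 bits -> True
        print(is_valid_aes_key(bytes(range(0x19))))  # 25 bytes -> 200 bits -> False

        random_key = os.urandom(24)  # a randomly generated 192-bit key
        print(is_valid_aes_key(random_key))          # True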


  • How to delete duplicate records in MySQL by retaining those fields with data in the duplicate item b

    - by NJTechGuy
    I have a few thousand records with a few hundred fields in a MySQL table. Some records are duplicates and are marked as such. Now, while I could simply delete the dupes, I want to retain any valuable non-null data that is present in the duplicate but missing in the original version of the record. Hope I made sense. For instance:

        a  b  c  d  e  f  key  dupe
        ---------------------------
        1  d  c  f  k  l  1    x
        2  g     h     j  1
        3  i     h  u  u  2
        4  u  r  t        2    x

    From the above sample table, the desired output is:

        a  b  c  d  e  f  key  dupe
        ---------------------------
        2  g  c  h  k  j  1
        3  i  r  h  u  u  2

    If you look at it closely, the duplicate is determined by the key: it is the same for two records, and the one with an 'x' in the dupe field is the one to be deleted, while retaining some of the fields from the dupe (like the c and e values for key 1). Please let me know if you need more info about this puzzling problem. Thanks a tonne!

    P.S.: If it is not possible using MySQL, a Perl/Python script sample would be awesome! Thanks!
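    Since the poster invites a Python sample, here is a minimal sketch of the merge logic only (column names follow the sample table; fetching and deleting rows is left out): for each key, keep the non-dupe row and fill its NULL columns from the dupe row, COALESCE-style.

        def merge_rows(keeper, dupe):
            """Fill the keeper's NULL (None) columns from its dupe row."""
            return {col: (val if val is not None else dupe.get(col))
                    for col, val in keeper.items()}

        # Toy rows for key=1, mirroring the sample table (None = NULL).
        keeper = {'a': 2, 'b': 'g', 'c': None, 'd': 'h', 'e': None, 'f': 'j'}
        dupe   = {'a': 1, 'b': 'd', 'c': 'c', 'd': 'f', 'e': 'k', 'f': 'l'}

        print(merge_rows(keeper, dupe))
        # -> {'a': 2, 'b': 'g', 'c': 'c', 'd': 'h', 'e': 'k', 'f': 'j'}

    In SQL the same per-column logic is COALESCE(keeper.col, dupe.col) in an UPDATE joined on the key column, followed by DELETE ... WHERE dupe = 'x'.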


  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly trafficked views that rely on a relatively static set of data. The key word is relatively: I need to invalidate the memcached key for that particular URL's data when it's changed in the database. To be as clear as possible, here's the meat an' potatoes of the view (Person is a model, cache is django.core.cache.cache):

        def person_detail(request, slug):
            if request.is_ajax():
                cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)
                # Check the cache to see if we've already got this result made.
                json_dict = cache.get(cache_key)
                # Was it a cache hit?
                if json_dict is None:
                    # That's a negative, Ghost Rider
                    person = get_object_or_404(Person, display=True, slug=slug)
                    json_dict = {
                        'name': person.name,
                        'bio': person.bio_html,
                        'image': person.image.extra_thumbnails['large'].absolute_url,
                    }
                    cache.set(cache_key, json_dict)
                # json_dict will now exist, whether it's from the cache or not
                response = HttpResponse()
                response['Content-Type'] = 'text/javascript'
                # Make sure it's all properly formatted for JS by using simplejson
                response.write(simplejson.dumps(json_dict))
                return response
            else:
                # This is where the fully templated response is generated

    What I want to do is get at that cache_key variable in its "unformatted" form, but I'm not sure how to do this, if it can be done at all. Just in case there's already something to do this, here's what I want to do with it (this is from the Person model's hypothetical save method):

        def save(self):
            # If this is an update, the key will be cached, otherwise it won't;
            # let's see if we can't find me
            try:
                old_self = Person.objects.get(pk=self.id)
                cache_key = # Voodoo magic to get that variable
                old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug)  # Generate the key currently cached
                cache.delete(old_key)  # Hit it with both barrels of rock salt
            # Turns out this doesn't already exist; let's make that first request
            # even faster by making this cache right now
            except DoesNotExist:
                # I haven't gotten to this yet.
            super(Person, self).save()

    I'm thinking about making a view class for this sort of stuff, with functions in it like remove_cache or generate_cache, since I do this sort of thing a lot. Would that be a better idea? If so, how would I call the views in the URLconf if they're in a class?
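    Not part of the original post, but a common pattern worth sketching (module and function names below are made up): define the key template once in a shared helper and import it from both the view and the model, so there is no variable to recover from the view at all.

        # cache_keys.py (hypothetical helper module)
        from django.conf import settings

        def person_about_key(slug):
            """Single definition of the ABOUT cache key for a Person slug."""
            return "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)


        # models.py: the save() side uses the same helper, no voodoo needed
        from django.core.cache import cache
        from django.db import models
        from cache_keys import person_about_key

        class Person(models.Model):
            # ... fields as in the question ...

            def save(self, *args, **kwargs):
                if self.id:  # an update: evict the entry cached under the old slug
                    old_self = Person.objects.get(pk=self.id)
                    cache.delete(person_about_key(old_self.slug))
                super(Person, self).save(*args, **kwargs)

    The view would then call cache.set(person_about_key(slug), json_dict) instead of formatting the key inline.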


  • How do you create a non-Thread-based Guice custom Scope?

    - by Russ
    It seems that all Guice's out-of-the-box Scope implementations are inherently Thread-based (or ignore Threads entirely): Scopes.SINGLETON and Scopes.NO_SCOPE ignore Threads and are the edge cases: global scope and no scope. ServletScopes.REQUEST and ServletScopes.SESSION ultimately depend on retrieving scoped objects from a ThreadLocal<Context>. The retrieved Context holds a reference to the HttpServletRequest that holds a reference to the scoped objects stored as named attributes (where name is derived from com.google.inject.Key). Class SimpleScope from the custom scope Guice wiki also provides a per-Thread implementation using a ThreadLocal<Map<Key<?>, Object>> member variable. With that preamble, my question is this: how does one go about creating a non-Thread-based Scope? It seems that something that I can use to look up a Map<Key<?>, Object> is missing, as the only things passed in to Scope.scope() are a Key<T> and a Provider<T>. Thanks in advance for your time.
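    Not from the original post, but the crux can be sketched language-neutrally (Python below, purely illustrative; this is not the Guice API): strip the ThreadLocal out of SimpleScope and what remains is a map owned by the scope object itself, which some caller must explicitly enter and exit around a unit of work. That object is the missing "something to look the map up in".

        class ExplicitScope:
            """Illustrative sketch: a scope whose lifetime is controlled
            explicitly by whoever holds the scope object, not by a thread."""

            def __init__(self):
                self.values = None  # the Map<Key<?>, Object> equivalent

            def enter(self):
                self.values = {}

            def exit(self):
                self.values = None

            def scope(self, key, unscoped_provider):
                # mirrors Scope.scope(Key<T>, Provider<T>): return a provider
                # that memoizes within the currently entered scope instance
                def scoped_provider():
                    if self.values is None:
                        raise RuntimeError("scope not entered")
                    if key not in self.values:
                        self.values[key] = unscoped_provider()
                    return self.values[key]
                return scoped_provider

    Translated back to Java, this is SimpleScope with the ThreadLocal<Map<Key<?>, Object>> replaced by a plain member field, at the cost that only one unit of work can occupy the scope at a time unless the caller manages multiple scope instances.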


  • iPhone encryption not working

    - by Futur
    I have this encryption/decryption implemented, and it executes with no error, but I am unable to decrypt the data, and I'm not sure whether the data is even encrypted. Kindly help.

        - (NSData *)AES256EncryptWithKey:(NSString *)key {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [strData length];
            data = [[NSData alloc] init];
            data = [strData dataUsingEncoding:NSUTF8StringEncoding];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *pribuffer = malloc(bufferSize);   // output buffer (note: never passed to CCCrypt below)
            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL /* initialization vector (optional) */,
                                                  [data bytes], dataLength,   /* input */
                                                  [data bytes], bufferSize,   /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:data length:numBytesEncrypted];
            }
        }

    The decryption is:

        -(NSData *)AES256DecryptWithKey:(NSString *)key andForData:(NSData *)objDataObject {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [objDataObject length];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);   // likewise allocated but unused
            size_t numBytesDecrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCDecrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL,
                                                  [objDataObject bytes], dataLength,
                                                  [objDataObject bytes], bufferSize,
                                                  &numBytesDecrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:objDataObject length:numBytesDecrypted];
            }
        }

    I call the above methods like this:

        CryptoClass *obj = [CryptoClass new];
        NSData *objData = [obj AES256EncryptWithKey:@"hell"];
        NSData *val = [obj AES256DecryptWithKey:@"hell" andForData:objData];
        NSLog(@"decrypted string is : %@ AND LENGTH IS %d", [val description], [val length]);

    Decryption doesn't seem to happen at all, and I am not sure about the encryption either. Kindly help me here.


  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using webapp, Django, web2py or [insert framework here]?

    1. Readable URLs and entity key names

    Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example of how to do it). On Google App Engine, key-based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names:

        import re
        import unicodedata

        def urlify(unicode_string):
            """Translates latin1 unicode strings to url friendly ASCII.

            Converts accented latin1 characters to their non-accented ASCII
            counterparts, converts to lowercase, converts spaces to hyphens
            and removes all characters that are not alphanumeric ASCII.

            Arguments
                unicode_string: Unicode encoded string.

            Returns
                String consisting of alphanumeric (ASCII) characters and hyphens.
            """
            str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')
            str = re.sub('[^\w\s-]', '', str).strip().lower()
            return re.sub('[-\s]+', '-', str)

    This works fine for English and Swedish, but it will fail for non-Western scripts and remove letters from some Western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages?

    2. Translating templates

    Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are available for your chosen template language?

    3. Translated datastore content

    When serving content from (or storing it to) the datastore: is there a better way than reading the Accept-Language parameter from the HTTP request and matching it against a language property that you have set on each entity?
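    On question 1, one possible direction (a sketch, with the caveat that it keeps non-ASCII letters in the slug; browsers display these fine but percent-encode them on the wire): instead of transliterating to ASCII, keep Unicode word characters from any script, with a short hash as a fallback for titles where nothing usable survives.

        import hashlib
        import re
        import unicodedata

        _keep = re.compile(r'[^\w\s-]', re.UNICODE)  # word chars in any script
        _dash = re.compile(r'[-\s]+')

        def urlify_any(unicode_string):
            """Like urlify(), but keeps letters from any script."""
            s = unicodedata.normalize('NFKC', unicode_string)
            s = _dash.sub('-', _keep.sub('', s).strip().lower())
            if not s:  # e.g. a punctuation-only title: fall back to a stable hash
                s = hashlib.md5(unicode_string.encode('utf-8')).hexdigest()[:8]
            return s

        print(urlify_any(u'Blåbærsyltetøy på brød'))  # blåbærsyltetøy-på-brød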


  • Device Manager - does USB listing look right?

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager: two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got the message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right: a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device also still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)


  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful. For the first time I find myself needing a DAS solution. My priority is data throughput in a single direction: I can write large blocks, and I don't need to read at the same time. The server (the data-producing device) is not really a typical server; it's a very powerful single-board computer. As such I have limited options when it comes to add-in cards, since they must use the fairly uncommon XMC interface. Currently I believe I am limited to PCIe x8 gen 1, which means the likely bottleneck for me will be that 16 Gbps connection. XMC boards I have found so far offer the following connections:

    a) Dual 10GbE Ethernet controller, total throughput 20 Gbps
    b) Dual quad-lane SAS 2.0 connectors (SFF-8XXX) HBA (no RAID), total throughput 48 Gbps
    c) Dual FC 8 Gb HBA (no RAID), total throughput 16 Gbps

    My questions for you guys are:

    1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a RAID controller that has external SAS or FC ports and expect to connect it to my SAS or FC HBA and be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility?
    2) On a device like a Dell PowerVault (either DAS or NAS), is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I can't manage it remotely (i.e. if my single-board computer uses an OS not supported by Dell OpenManage)? Would this be true of nearly any device that calls itself a DAS?
    3) If I purchase some sort of Supermicro storage chassis and install a RAID controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS, since the RAID card would be configured pre-boot anyway?
    4) It is much easier to buy XMC-based 10-gigabit Ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS via direct cabling with SFP+ cables?

    Thanks in advance


  • Connecting to BSNL WLL with a Huawei fixed wireless modem connected to a USB port on Linux Mint

    - by Rakesh
    On Windows, when I plugged in the USB connector, the device was recognized as a mass storage device and installed all its drivers on my computer; then the device automatically restarted and started behaving like a modem. I'm writing this post using that same modem, through Windows... On Linux Mint (Debian kernel, similar to Ubuntu 8.10), the device is sometimes recognized as a mass storage device, but there are no programs on it useful for Linux... When I use the modem on Windows, restart the computer, and log in to Linux, the device shows up as "not yet configured" in the output of the terminal command "lsusb"... I googled a lot for solutions and tried many things; at last I configured it and ran the command "wvdial", but I get the error that the modem is not responding. :'( Please help me out... Many more people are facing this problem, as I discovered when I googled for it. The manufacturer's website is www.huawei.com. Specification: model: ETS1220, FREQ: 800M. Thank you.


  • Cache of objects or output cache in the view? Which is better?

    - by Felipe
    Hi everybody, I have an e-commerce site running in ASP.NET MVC. I'm using caching to improve performance on my pages and it's working fine. I'd like to know which gives better performance: for example, I can set OutputCache on my views and use that cache for the whole page, OR I could get my list of products in the controller, put it in the cache (like the code below), and send it to the view to render for the user.

        private IEnumerable<Products> GetProductsCache(string key, ProductType type)
        {
            if (HttpContext.Cache[key] == null)
                HttpContext.Cache.Insert(key, ProductRepository.GetProducts(type), null,
                    DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration);
            return (IEnumerable<Products>)HttpContext.Cache[key];
        }

        public ActionResult Index()
        {
            var home = new HomeViewModel()
            {
                Products = GetProductsCache("ProductHomeCache", ProductType.Product),
                Services = GetProductsCache("ServiceHomeCache", ProductType.Service)
            };
            return View(home);
        }

    Both work fine, but I'd like to know which is suggested for better performance, or whether there are other ways to do it better. PS: sorry for my English! Thanks all... Cheers


  • NETAPP Fragmentation

    - by mdpc
    We all know that once a disk (or storage system, for that matter) is put into use, performance degrades due to fragmentation of files. This seems to be why disk defragmenters are in fairly wide use on Windows boxes, and they do increase performance substantially. As an aside, I haven't heard of many defragmenters in the Unix/Linux area. Despite the claimed WAFL protections on the NetApp, file fragmentation will still occur, especially with all the sparsely created VMs. My question is: does anybody do any sort of defragmentation on such a storage system? Do you notice any measurable degradation or improvement from doing, or not doing, anything to address this situation? If so, what do you do? Thanks


  • Best practice for assigning private IP ranges?

    - by Tauren
    Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.


  • DNSSEC - AD flag not activated

    - by Arancha
    Hi all, I have some doubts regarding DNSSEC. I have one server acting as an Authoritative Name Server and another one as a Cache/Resolver. I'm using Bind 9.7.1-P2 and these are my configuration files: Named.conf (Authoritative Server) // Opciones de configuracion del servidor include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; notify yes; #files 65535; dnssec-enable yes; dnssec-validation yes; allow-transfer { 172.23.2.37; 172.23.3.39; }; transfer-format many-answers; transfers-per-ns 5; transfers-in 10; max-transfer-time-in 120; check-names master ignore; listen-on {172.23.2.57; 80.58.102.13; 80.58.102.103; 127.0.0.1; }; }; zone "test.dnssec" { type master; key-directory "keys"; file "db.test.dnssec.signed"; also-notify { 172.23.2.37 ; 172.23.3.39 ; }; allow-transfer { 172.23.2.37 ; 172.23.3.39 ; }; }; test.dnssec zone test.dnssec. 86400 IN SOA ns.test.dnssec. mxadmin.test.dnssec. ( 2010090902 ; serial 21600 ; refresh (6 hours) 3600 ; retry (1 hour) 1814400 ; expire (3 weeks) 172800 ; minimum (2 days) ) 86400 RRSIG SOA 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. eY99laB6PrtETaXLdCS+G8Uq1lIK7d5vxUB1 pAQ9npv/YbvX1pdWZKGojDgPGw8V65Q0zKQo YW1VuBzvwfSRKax+yrjJzvHQGfCZPJWARehK hgLxHOfXLVH7tyndvLD49ZKcWtrop+Tuy4n9 apWWfSJZxCOngwS7zUi0zCTKfPs= ) 86400 NS ns1.test.dnssec. 86400 RRSIG NS 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeo idNuytxbiFnbCOunzvaYpgvDpEr0CPrwXaDL TSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFw aaQXFc3rDLsXjCi+WF0/Z7meteM4jYdx5nrV Qx9pgur7VPbP88bJOqWCPBev2Ho= ) 172800 NSEC a.test.dnssec. NS SOA RRSIG NSEC DNSKEY 172800 RRSIG NSEC 5 2 172800 20101009062248 ( 20100909062248 40665 test.dnssec. E76ayamsAAz8Zcj7060KY0nTFzHPztM/Pkc5 OM0EcP7C5+ocn4L8M2J0rmR3jxfYvCpOk0BQ Zniqn9Aw41Qk068yJ2dfDPwV5zT0+te0nzwC /awJGPMXLzMj4JejYTlTiKfspGDJCG44F+lb lHXdcUhbjXf3loqMQadZFQ/eSn0= ) 86400 DNSKEY 256 3 5 ( AwEAAbQ8qrNN5vetx/7E1VOgXZ7fLqwG1y/i 55hWGCeLbcS95ratT9A6UospOvPSwPTlrFgF RWP67Pubzbsy7/damS1F1+p4GgBQway52Hd1 8HjdHKKC6kIxna9pOJBRfhCdzAsv9LnpRvrw mDpcFAqhdn5k5RqwcUF1eOZrKjxXjAOr ) ; key id = 40665 86400 DNSKEY 257 3 5 ( AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdM ZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVp xXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7m YF29/ZTXB6nmdSxruQlSvYhzkWTaPNtfrUnI UlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWX nPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPm p2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQX ISmAeV1evGomCC/x9DNleDHCszJOptwurzRP Z7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTz CkRnrlvXYJpgzDtgmQxE9Bs= ) ; key id = 59647 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. sa4W3tvl6n0TkIcq3xzhG17C2O0lRhllrpUd n5Hs6yVo8r7stewP6tm2XscQiAeseDgmv28w s6Mtiz8uPUbrgFRb6SJk7coH2n/2Y3//S9YP NldDFv3luPnnU1TBb3jDsBKIZWHU9yl/cLNA OKUhlMDd40txk+fQi3iiV5Ls9K8= ) 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 59647 test.dnssec. b5fz0dEp2co2pVO7biY896XmsJanjQIR69vC MvSF104/9iZk6eGVFi6hsa4aZcXutEjUDESB ynPkDjMWWIIhN6K1jYKGIc/sFKv1IUONRYHF KXGgZhC6aI0B1E4NA9AXLjlBVF60nHdc3iw8 5gTLDjypP3qAZrnzMvdiBopLnVdB25UZYKn8 mGpOuzKqX02TGMCFMlEVtMX4FP/XKAE8UjiQ 5ehC1JvIKIyg/2zM+ot3nmcqqtUfzp/Hweyc aIkl/9wPJPwMedfTqOjfUKFdB+GiZ0Zz16HZ 5MfJui5IGh5Y6Q04kMrnap2V5U7mByTzx/ud V/eFYhmSHGtAXzBjMA== ) a.test.dnssec. 
86400 IN A 1.1.1.1 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzH oU63fHJHQHeQV+fc0Rx8cCmZSzuqk1lSBelV 3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsg YEUQJMfk4FLjYW67DHNcuoCnKbDJhZS0ndVf I474k7ZEZJsGslwk/vcIoFnTa4o= ) 172800 NSEC b.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. TCduf7xPSrWvEAzBO7Kx5haR85yA/lbsswkQ v0QxlskqAqo+9YedGQV+wGblbCIOmkomrYcq u/rXQ5yoQ3SDXd/bw6EFdoQmH8UJOjMc7SdR xY93MjawPB6XXlJsSlbBFPWJwEpILVRhdBFX czdS5VCa1KmhAYZYQp1FY9rMelA= ) b.test.dnssec. 86400 IN A 2.2.2.2 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. f0M6Tcqe6B09ctaN3BGAit4u4cJE8x3Ik8sh gyMu0GN/lMv/Bo7PB6hgylLam3HXtF1pPAzX oYudXmhU8afPapHMXfUitC1lFQB5ZW052ZC7 JXV9MnGULydz1blj2EdN+JL3Za8SJKM0LrLB XdQ+QUV+A/6N7hUV6usz5YmdBeI= ) 172800 NSEC ns1.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. sc6v19dcOFVa295/Xf1pKxBhbdpEErY8CTDQ fw2fjJf0Y3wL1Y1Mlr5zi5ShceQwgua+6YHE DWNbAPcXrJ0lLMU4DU5r0sAyBiBCgCavngGk i59W+nv11zuIpPMnlaMHpJVfJrQ+c4z7H9MH 77B0fMRFTUnvAXoq6ag8Q5POITI= ) ns1.test.dnssec. 86400 IN A 3.3.3.3 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Ke z0tdFiNfxvGbm85XyCtSqJIo2S/ZLVJUv/mG nGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP 5FL8SbjlovVYYAG5woW4p3+os28mmCAJA8gP JTywbcREEhFB4cir2M/QVP+9h+Y= ) 172800 NSEC test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. i7F/ezGl/pGXCC6JyVDaxuwdZMAgv9QLxwzi PTgjCG8Sj6pTIxaQkSLwXsoB9gF77WWBANow R2SWdz0Zai2vWnv/NYoNm9ZfRJEQ9NuExeYp rvX/+lLOHvZXN6tUerIQbWAxO2GwdzHoejSn wReUNVr9MxzZUvuJ33Z7X/7s9VQ= ) Named.conf (Cache/Resolver) include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; recursion yes; notify no; #DNSSEC dnssec-enable yes; dnssec-validation yes; listen-on {127.0.0.1; 172.23.2.87; 80.58.102.37; 80.58.102.115; }; #listen-on {127.0.0.1; 80.58.102.37; 80.58.102.115; }; allow-query { telefonica; }; allow-transfer { none; }; recursive-clients 40000; max-cache-size 838860800; rrset-order { order fixed;}; max-ncache-ttl 600; }; trusted-keys { "test.dnssec." 257 3 5 "AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdMZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVpxXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7mYF29/ZT XB6nmdSxruQlSvYhzkWTaPNtfrUnIUlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWXnPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPmp2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQXIS mAeV1evGomCC/x9DNleDHCszJOptwurzRPZ7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTzCkRnrlvXYJpgzDtgmQxE9Bs="; }; I have configured a secure zone (test.dnssec) and I'm trying to perform some queries from the resolver to the Name server (172.23.2.57): /usr/local/bin/dig @172.23.2.57 a.test.dnssec +dnssec ; <<>> DiG 9.7.1-P2 <<>> @172.23.2.57 a.test.dnssec +dnssec ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2654 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;a.test.dnssec. IN A ;; ANSWER SECTION: a.test.dnssec. 86400 IN A 1.1.1.1 a.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. 
P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzHoU63fHJHQHeQV+ fc0Rx8 cCmZSzuqk1lSBelV3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsgYEUQ JMfk4FLjYW67DHNcuoCnKbDJhZS0ndVfI474k7ZEZJsGslwk/vcIoFnT a4o= ;; AUTHORITY SECTION: test.dnssec. 86400 IN NS ns1.test.dnssec. test.dnssec. 86400 IN RRSIG NS 5 2 86400 20101009062248 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeoidNuytxbiFnbCOunzvaY pgvDpEr0CPrwXaDLTSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFwaaQX Fc3rDLsXjCi+WF0/Z7meteM4jYdx5nrVQx9pgur7VPbP88bJOqWCPBev 2Ho= ;; ADDITIONAL SECTION: ns1.test.dnssec. 86400 IN A 3.3.3.3 ns1.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Kez0tdFiNfxvGbm85XyCtS qJIo2S/ZLVJUv/mGnGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP5FL8 SbjlovVYYAG5woW4p3+os28mmCAJA8gPJTywbcREEhFB4cir2M/QVP+9 h+Y= ;; Query time: 1 msec ;; SERVER: 172.23.2.57#53(172.23.2.57) ;; WHEN: Thu Sep 9 09:47:14 2010 ;; MSG SIZE rcvd: 605 I obtain the right answer along with the RRSIG records, but the problem is that I'm not seeing the ad flag activated. Any idea about what is wrong????


  • Feedback on Using ZFS and FreeBSD

    - by ToiletOverflow
    I need to create a server that will be used solely for backing up files. The server will have 2TB of storage to begin with but I may want to add additional storage later on. As such, I am currently considering using FreeBSD + ZFS as the OS and file system. Is ZFS a reliable, trusted file system? Should I use it in this scenario? I have read that ZFS should be used with OpenSolaris over FreeBSD as OpenSolaris is usually ahead of the curve with ZFS as far as version updates and stability. However, I am not interested in using OpenSolaris for this project. An alternative option that I am considering is to stick with ext3 and create multiple volumes if need be, because I know that I will not need a single, continuous volume larger than 2TB. Thanks in advance for your feedback.


  • Facebook Connect - Using both FBML and PHP Client

    - by Ryan
    I am doing my second Facebook Connect site. On my first site, I used the Facebook Connect FBML to sign a user in; then I could access other information via the PHP client. With this site, using the same exact code, it doesn't seem to be working. I'd like someone to be able to sign in using the Facebook button:

        <fb:login-button length="long" size="medium" onlogin="facebook_onlogin();"></fb:login-button>

    After they have logged in, the facebook_onlogin() function makes some AJAX requests to a PHP page which attempts to use the Facebook PHP client. Here's the code from that page:

        $facebook = new Facebook('xxxxx API KEY xxxxx', 'xxxxx SECRET KEY xxxxx');
        //$facebook->require_login();  ** I DONT WANT THIS LINE!!
        $userID = $facebook->api_client->users_getLoggedInUser();
        if ($userID) {
            $user_details = $facebook->api_client->users_getInfo($userID, 'last_name, first_name');
            $firstName = $user_details[0]['first_name'];
            $lastName = $user_details[0]['last_name'];
            $this->view->userID = $userID;
            $this->view->firstName = $firstName;
            $this->view->lastName = $lastName;
        }

    The problem is that the PHP client thinks I don't have a session key, so $userID never gets set. If I override $userID with my Facebook ID (which I am logged in with), I get this error:

        exception 'FacebookRestClientException' with message 'A session key is required for calling this method' in C:\wamp\www\mab\application\Facebook\facebookapi_php5_restlib.php:3003

    If I uncomment the require_login(), it will get a session ID, but it redirects pages around a lot and breaks my AJAX calls. I am using the same code I successfully used on another site. Anyone have any ideas why the PHP client doesn't know about the session ID after a user has logged in with the Connect button? Any ideas would be greatly appreciated. Thanks!


  • Setcookie > sniff > output on same page

    - by lokust
    Hi, I wonder if someone can help shed some light on this: I drop a cookie if a user arrives at the site with a specific key/value in the query string, i.e.:

        http://www.somesite.com?key=hmm01

    The cookie code sits at the top of the template, before the XHTML <!DOCTYPE> declaration:

        <?php
        header("Content-Type: text/html; charset=utf-8");
        ob_start();
        if (isset($_GET['key'])) {
            setcookie("cookname", $_GET['key'], time()+2592000); /* Expires in a month */
        }
        ob_end_flush();
        ?>

    On the same page, further down, I have the following PHP code that sniffs the cookie and outputs some text:

        <?php
        $cookievalue = isset($_COOKIE['cookname']) ? $_COOKIE['cookname'] : null; // read the cookie back
        switch ($cookievalue) {
            case 'hmm01':
                echo "abc";
                break;
            case 'hmm02':
                echo "def";
                break;
            case 'hmm03':
                echo "ghi";
                break;
            default:
                echo "hello";
        }
        ?>

    The problem is that when the user first arrives, the sniffer script doesn't detect the cookie and outputs the default text "hello". Only when the user refreshes the page or navigates to a different page does the sniffer detect the cookie. Any ideas on how to drop the cookie and output the correct text without a page refresh? Many thanks.


  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great.

    Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).

    Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy.

    I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy.

    I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10mil records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM, so I don't think it's the machine.

    Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights


  • How to pipe data to an sftp connection?

    - by JMW
    ftp supports the put "|..." "remote-file.name" command to pipe data to an ftp connection. Is there something similar available for sftp? In sftp I get the following error:

        sftp 'jmw@backupsrv:/uploads'
        sftp> put "| tar -cx /storage" "backup-2012-06-19--17-51.tgz"
        stat | tar -cv /storage: No such file or directory

    As shown above, the sftp client doesn't execute the command. I want to use the pipe command to redirect the file stream directly to sftp, because there is not enough space left to create a backup file on the same disk before uploading it to the sftp server.
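    An aside, not from the original post: sftp has no counterpart to ftp's put "|..." syntax. If plain ssh to the same host is available, one common workaround is to stream the archive without ever touching the local disk, e.g. tar -czf - /storage | ssh jmw@backupsrv 'cat > /uploads/backup.tgz'. The same pipeline, sketched in Python (host and paths taken from the question; assumes key-based ssh login):

        import subprocess

        # Stream a gzipped tar of /storage straight to the remote host over ssh,
        # so nothing is written to the (nearly full) local disk first.
        tar = subprocess.Popen(["tar", "-czf", "-", "/storage"],
                               stdout=subprocess.PIPE)
        ssh = subprocess.Popen(["ssh", "jmw@backupsrv",
                                "cat > /uploads/backup-2012-06-19--17-51.tgz"],
                               stdin=tar.stdout)
        tar.stdout.close()  # so tar sees SIGPIPE if ssh exits early
        ssh.wait()
        tar.wait()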


  • Windows Server 2003 R2 SP2 and Exchange 2003 - missing pop-up menu "Exchange Tasks"

    - by Denis
    I need to recover a database. Here is what I'm doing, step by step:

    - In Exchange System Manager, check the server
    - Create a new Recovery Storage Group
    - Add the database to recover
    - Mount the store for that database (Mailbox Store) - all of this finishes successfully

    Next step: I need to check the user, and in the pop-up menu click "Exchange Tasks...", but in the menu I see only "Help". Main question: why do I not have "Exchange Tasks", and how can I get it? I can, however, see "Exchange Tasks" under "First Storage Group" - Mailbox - User. Sorry for my bad English. Thanks, Denis


  • What does "active directory integration" mean in your .NET app?

    - by flipdoubt
    Our marketing department comes back with "active directory integration" being a key customer request, but our company does not seem to have the attention span to (1) decide on what functional changes we want to make toward this end, (2) interview a broad range of customers to identify the most requested functional changes, and (3) still have this be the "hot potato" issue next week. To help me get beyond the broad topic of "active directory integration," what does it mean in your .NET app, both ASP.NET and WinForms? Here are some sample changes I have to consider:

    - When creating and managing users in your app, are administrators presented with a list of all AD users or just a group of AD users?
    - When creating new security groups within your app (we call them Departments, like "Human Resources"), should this create new AD groups?
    - Do administrators assign users to security groups within your app or outside via AD? Does it matter?
    - Is the user signed on to your app by virtue of being signed on to Windows? If not, do you track users with your own user table and some kind of foreign key into AD? What foreign key do you use to link app users to AD users? Do you have to prove your login process protects user passwords?
    - What foreign key do you use to link app security groups to AD security groups?
    - If you have a WinForms component to your app (we have both ASP.NET and WinForms), do you use the Membership Provider in your WinForms app? Currently, our Membership and Role management predates the framework's version, so we do not use the Membership Provider.

    Am I missing any other areas of functional changes?

    Followup question: do apps that support "active directory integration" have the ability to authenticate users against more than one domain? Not that one user would authenticate to more than one domain, but that different users of the same system would authenticate against different domains.


  • registration 0.8 alpha activation problem

    - by craphunter
    Got the following error:

        Exception Type: TypeError at /accounts/account/activate/success/
        Exception Value: activate() takes at least 2 non-keyword arguments (1 given)

    My view:

        def activate(request, backend,
                     template_name='registration/activation_complete.html',
                     success_url=None, extra_context=None, **kwargs):
            backend = get_backend(backend)
            account = backend.activate(request, **kwargs)
            if account:
                if success_url is None:
                    to, args, kwargs = backend.post_activation_redirect(request, account)
                    return redirect(to, *args, **kwargs)
                else:
                    return redirect(success_url)
            if extra_context is None:
                extra_context = {}
            context = RequestContext(request)
            for key, value in extra_context.items():
                context[key] = callable(value) and value() or value
            return render_to_response(template_name, kwargs, context_instance=context)

    My urls:

        urlpatterns = patterns('',
            url(r'^activate/complete/$', direct_to_template,
                {'template': 'registration/activation_complete.html'},
                name='registration_activation_complete'),
            # Activation keys get matched by \w+ instead of the more specific
            # [a-fA-F0-9]{40} because a bad activation key should still get to
            # the view; that way it can return a sensible "invalid key" message
            # instead of a confusing 404.
            url(r'^activate/(?P<activation_key>\w+)/$', activate,
                {'backend': 'registration.backends.default.DefaultBackend'},
                name='registration_activate'),
            url(r'^register/$', register,
                {'backend': 'registration.backends.default.DefaultBackend'},
                name='registration_register'),
            url(r'^register/complete/$', direct_to_template,
                {'template': 'registration/registration_complete.html'},
                name='registration_complete'),
            url(r'^register/closed/$', direct_to_template,
                {'template': 'registration/registration_closed.html'},
                name='registration_disallowed'),
            (r'', include('registration.auth_urls')),
            url(r'^account/activate/(?P<activation_key>\w+)/$', 'registration.views.activate',
                {'success_url': 'account/activate/success/'},
                name='registration_activate2'),
            url(r'^account/activate/success/$', direct_to_template,
                {'template': 'registration/activation_complete.html'},
                name='registration_activation_complete'),
        )

    What am I doing wrong? Thanks!


  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an active-active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT using Windows NLB. I'm aware this may not be a best practice or a recommended setup; however, it would be configured as follows: two web servers running IIS 7.5 (sharing common storage for web content) for HA, and another set of two servers in an active-passive SQL cluster for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate replies covering both the pros and cons of this setup.


  • Changing the default data directory to one on an external NAS, or adding an external share

    - by Hagbart Celine
    So I have been searching the web for days, looking for a solution to my problem. Now my only hope is you guys.

    I have installed ownCloud successfully on my Windows Server 2008 R2. It all runs smoothly and I can connect without problems. So the first checks are OK. Now I wanted to change the default data directory from my server to a shared folder on my NAS (Synology DS1813+, DSM 5.0-4493 Update 3). I tried the following.

    Changing the directory in config.php: I changed the path in the config file from "C:\inetpub\wwwroot\myfolder\data" to "\\NASIP\cloud". After doing this, the ownCloud server only shows:

        Daten-Verzeichnis (\\192.168.2.4\Cloud\data) ist ungültig
        Bitte stelle sicher, dass das Daten-Verzeichnis eine Datei namens ".ocdata" im Wurzelverzeichnis enthält.

    (That is: the data directory is invalid; please make sure the data directory contains a file named ".ocdata" in its root.) I also tried copying the files that were created in the local data storage to the share on the NAS. No luck. Then I tried mapping a network drive and using that in config.php, but still no luck: I get the same message about the missing .ocdata file.

    Next I tried the "External Storage" app that comes with ownCloud. I thought that at least I could add the share as external storage, but this also does not work. I tried UNC and the mapped drive name (Z:), but nothing helped.

    So now I'm turning to you. Does anyone have experience with this kind of setup? Or can you even tell me how to make it work? (Default or external storage, I don't care anymore.) Setup: NAS (Synology DS1813+, DSM 5.0-4493 Update 3), ownCloud 7, Windows Server 2008 R2, IIS7.

    I got an answer on another forum. "The second option is how it should be done:

    1. Put OC in maintenance mode
    2. Mount (mapping, in the Windows world) your NAS directly to your OS
    3. Copy the local data directory to the NAS mount
    4. Ensure the permissions are set up to give the web user access to the NAS mount
    5. Update the OC config.php with the new data path
    6. Disable OC maintenance mode"

    And this seems like the right way. "Ensure the permissions are set up to give the web user access to the NAS mount" is where I am not sure: what user on my server is actually making the requests to the NAS? If the user is, for example, "IUSR", can I just create an account on my Synology NAS and give it full access to my share? (But what is IUSR's password?) I have full root SSH access to my NAS, so if you can tell me what chmod or chown I need to use on my cloud folder...


  • New standalone ESXi 5 deployments - USB versus SD card?

    - by ewwhite
    Now that the old full VMware ESX with service console is no more, I'm redeploying some standalone ESXi servers. I'm using HP ProLiant ML and DL G6 and G7 servers. Does it make more sense to utilize the internal USB port for ESXi or the internal SD card slot? I'm using the HP ESXi 5 build, but am not sure what the recommended practice is. Any recommendations on cards/USB drives for this purpose? BTW, these will be all-in-one storage servers with the onboard disk storage presented via PCIe passthrough.


  • Persist changes in C

    - by Mohit Deshpande
    I am developing a database-like application that stores a structure containing:

        struct Dictionary {
            char *key;
            char *value;
            struct Dictionary *next;
        };

    As you can see, I am using a linked list to store information. But the problem begins when the user exits the program: I want the information to be stored somewhere. So I was thinking of storing the linked list in a permanent or temporary file using fopen, then, when the user starts the program, retrieving the linked list. Here is the method that prints the linked list to the console:

        void PrintList() {
            int count = 0;
            struct Dictionary *current;
            current = head;
            if (current == NULL) {
                printf("\nThe list is empty!");
                return;
            }
            printf(" Key \t Value\n");
            printf(" ======== \t ========\n");
            while (current != NULL) {
                count++;
                printf("%d. %s \t %s\n", count, current->key, current->value);
                current = current->next;
            }
        }

    So I am thinking of modifying this method to print the information through fprintf instead of printf, and then the program would just read the information back from the file. Could someone help me with how to read and write this file? What kind of file should it be, temporary or regular? And how should I format it (I was thinking of just having the key first, then the value, then a newline character)?
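    A side note on the format question: since the data must survive program exit, a regular file is the natural choice, and the layout itself is language-neutral. Here is the proposed format (key, tab, value, newline) round-tripped in a short Python sketch, assuming keys and values never contain tabs or newlines; in C the writing side maps to fprintf(fp, "%s\t%s\n", cur->key, cur->value) and the reading side to fgets() plus strchr().

        def save_pairs(pairs, path):
            # one "key<TAB>value" line per node, as the poster proposes
            with open(path, 'w') as f:
                for key, value in pairs:
                    f.write('%s\t%s\n' % (key, value))

        def load_pairs(path):
            pairs = []
            with open(path) as f:
                for line in f:
                    key, value = line.rstrip('\n').split('\t', 1)
                    pairs.append((key, value))
            return pairs

        save_pairs([('host', 'localhost'), ('port', '5432')], 'dict.dat')
        print(load_pairs('dict.dat'))  # [('host', 'localhost'), ('port', '5432')]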

