| Name      | Equal To         | Size (in Bytes)                   |
|-----------|------------------|-----------------------------------|
| Bit       | 1 bit            | 1/8                               |
| Nibble    | 4 bits           | 1/2 (rare)                        |
| Byte      | 8 bits           | 1                                 |
| Kilobyte  | 1,024 bytes      | 1,024                             |
| Megabyte  | 1,024 kilobytes  | 1,048,576                         |
| Gigabyte  | 1,024 megabytes  | 1,073,741,824                     |
| Terabyte  | 1,024 gigabytes  | 1,099,511,627,776                 |
| Petabyte  | 1,024 terabytes  | 1,125,899,906,842,624             |
| Exabyte   | 1,024 petabytes  | 1,152,921,504,606,846,976         |
| Zettabyte | 1,024 exabytes   | 1,180,591,620,717,411,303,424     |
| Yottabyte | 1,024 zettabytes | 1,208,925,819,614,629,174,706,176 |

Refer to: File Sizes
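Each row is a factor of $1024 = 2^{10}$ larger than the previous one, so the $n$-th unit above a byte holds $2^{10n}$ bytes; for example:

$$1\ \text{TB} = 1024^4 = 2^{40} = 1{,}099{,}511{,}627{,}776\ \text{bytes}.$$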
Usage examples driven by this configuration:

```java
/**
 * Default cache manager (defaultManager), TTL: 1 day.
 */
@Cacheable(cacheNames = "firstCache")
public Data prepareCache(String key) { ... }

/**
 * TTL: 30 days.
 */
@Cacheable(cacheNames = "SecondCache", cacheManager = "ttl30Days")
public Data prepareCache(String key) { ... }

/**
 * TTL: 1 hour.
 */
@Cacheable(cacheNames = "thirdCache", cacheManager = "ttlOneHour")
public Data prepareCache(String key) { ... }
```
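The cache-manager beans themselves aren't shown in this note; below is a minimal sketch of how they might be defined with Spring Data Redis, assuming the bean names `defaultManager`, `ttl30Days`, and `ttlOneHour` referenced by the annotations above:

```java
import java.time.Duration;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
@EnableCaching
public class CacheConfig {

    // Default manager, used when @Cacheable names no cacheManager: TTL 1 day.
    @Bean
    @Primary
    public RedisCacheManager defaultManager(RedisConnectionFactory factory) {
        return build(factory, Duration.ofDays(1));
    }

    // Referenced as cacheManager = "ttl30Days": TTL 30 days.
    @Bean
    public RedisCacheManager ttl30Days(RedisConnectionFactory factory) {
        return build(factory, Duration.ofDays(30));
    }

    // Referenced as cacheManager = "ttlOneHour": TTL 1 hour.
    @Bean
    public RedisCacheManager ttlOneHour(RedisConnectionFactory factory) {
        return build(factory, Duration.ofHours(1));
    }

    private RedisCacheManager build(RedisConnectionFactory factory, Duration ttl) {
        return RedisCacheManager.builder(factory)
                .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig().entryTtl(ttl))
                .build();
    }
}
```

Marking one manager `@Primary` makes it the fallback for any `@Cacheable` that does not name a `cacheManager` explicitly.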
The cause is that the Hive version in use is too old: it cannot recognize `integer`, only `int`. According to the official documentation, this takes effect from version 0.8.0.
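A minimal HiveQL sketch of the failure mode (table and column names are hypothetical); on such old Hive versions the first statement is rejected while the second parses:

```sql
-- Rejected on old Hive versions: the INTEGER keyword is not recognized
CREATE TABLE demo (id INTEGER);

-- Accepted: spell the type as INT instead
CREATE TABLE demo (id INT);
```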
Hit a problem when writing CSV with Spark: cells left unmatched after a join should be null, but Spark writes them all out as "":

```
F23338994668,F23338994669,F23338995220
12,1,1
1,7,""
13,1,1
6,1,1
```
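One way to make the unmatched cells distinguishable is to tell the CSV writer which string to emit for null via its `nullValue` option; a minimal Scala sketch (the `\N` marker and `/tmp/out` path are illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("csv-null-demo").getOrCreate()
import spark.implicits._

// Simulate a join leaving one cell unmatched (null).
val df = Seq(
  (Option(12), 1, Option(1)),
  (Option(1), 7, Option.empty[Int])
).toDF("F23338994668", "F23338994669", "F23338995220")

df.write
  .option("header", "true")
  .option("nullValue", "\\N") // write nulls as \N instead of an empty/quoted field
  .csv("/tmp/out")
```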
When arithmetic is performed on Spark decimal columns, precision can be lost. By default, spark.sql.decimalOperations.allowPrecisionLoss is true: when the exact result of a decimal operation cannot be represented, Spark rounds away fractional digits instead of returning null.
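A short Scala sketch of flipping the flag so that overflowing decimal results come back as null rather than silently rounded (the config key and its default are as documented for Spark 2.3+):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("decimal-demo").getOrCreate()

// Default is true: Spark rounds/truncates fractional digits when an exact
// decimal result does not fit. Setting it to false makes such operations
// return null instead, surfacing the overflow rather than hiding it.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
```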
Use RedisCacheConfiguration.defaultCacheConfig().computePrefixWith(...) to customize the cache key prefix.
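A minimal Java sketch (the `myapp:` namespace and the bean name are illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Bean
public RedisCacheManager prefixedManager(RedisConnectionFactory factory) {
    // CacheKeyPrefix is a functional interface: String compute(String cacheName);
    // keys then look like "myapp:<cacheName>:<key>".
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
            .computePrefixWith(cacheName -> "myapp:" + cacheName + ":");
    return RedisCacheManager.builder(factory).cacheDefaults(config).build();
}
```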
The implementation is a custom trigger (trigger_name, table_name, and column_name are placeholders): before each insert, if the new value already exists in the table, append an incrementing -N suffix until it no longer collides.

```sql
DROP TRIGGER IF EXISTS trigger_name;

DELIMITER |

CREATE TRIGGER trigger_name BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
    DECLARE original_column_name VARCHAR(255);
    DECLARE column_name_counter INT;

    SET original_column_name = NEW.column_name;
    SET column_name_counter = 1;

    -- Keep appending '-<counter>' until the value no longer collides.
    WHILE EXISTS (SELECT TRUE FROM table_name WHERE column_name = NEW.column_name) DO
        SET NEW.column_name = CONCAT(original_column_name, '-', column_name_counter);
        SET column_name_counter = column_name_counter + 1;
    END WHILE;
END;
|

DELIMITER ;
```
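With the trigger in place, repeated inserts of the same value behave like this (illustrative):

```sql
INSERT INTO table_name (column_name) VALUES ('foo');  -- stored as 'foo'
INSERT INTO table_name (column_name) VALUES ('foo');  -- collision: stored as 'foo-1'
INSERT INTO table_name (column_name) VALUES ('foo');  -- stored as 'foo-2'
```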